Finally, some data on trust. It's difficult to interpret data on trust in gen AI systems at the moment because public awareness is limited - with the exception of ChatGPT, 50% or more of respondents have not even heard of them.
08.10.2025 07:23 — 👍 1 🔁 1 💬 2 📌 0
No surprises that there's an age gap in the use of gen AI, with use more widespread among younger people.
But this only applies to standalone systems like ChatGPT. For AI embedded in other products, like Meta AI and Copilot, there's no age gap because the host product is used by people of all ages.
08.10.2025 07:23 — 👍 1 🔁 1 💬 1 📌 0
On the use of specific systems, ChatGPT is still the most widely used (22% weekly) - ahead of Gemini (11%), Meta AI (9%) and Copilot (6%).
Worth remembering that the public's use of a lot of the tools favoured by experts, like Claude and Perplexity, is still very marginal - 1% weekly.
08.10.2025 07:23 — 👍 1 🔁 1 💬 1 📌 0
A thread on how people's use of generative AI has changed in the last year - based on survey data from 6 countries (🇬🇧🇺🇸🇫🇷🇩🇰🇯🇵🇦🇷).
First, gen AI use has grown rapidly.
Most people have tried out gen AI at least once (61%), and 34% now use it on a weekly basis - roughly doubling from 18% a year ago.
08.10.2025 07:23 — 👍 1 🔁 5 💬 2 📌 0
✨🤖 Check out our new research on AI use around news and information and attitudes towards AI in society and journalism – with @rasmuskleis.bsky.social & @richardfletcher.bsky.social
07.10.2025 15:54 — 👍 4 🔁 1 💬 0 📌 0
Finally, as search engines increasingly integrate AI generated answers, we asked about trust in these – the trust scores are high across the board, with higher net positives than any of the standalone tools.
All this and more in the report here reutersinstitute.politics.ox.ac.uk/generative-a... 3/3
07.10.2025 09:18 — 👍 1 🔁 1 💬 0 📌 0
Asked whether they trust different AI tools, the picture is very differentiated, with net positive trust scores for e.g. ChatGPT, Google Gemini, and Microsoft Copilot, but negatives for those that are seen as part of various social media companies 2/3
07.10.2025 09:18 — 👍 1 🔁 1 💬 1 📌 0
How do people think different sectors’ use of generative AI will change their experience of interacting with them?
More optimists than pessimists for e.g. science and search engines, but more pessimists than optimists for news media, government, and – especially – politicians 1/3
07.10.2025 09:18 — 👍 8 🔁 9 💬 1 📌 2
And people can still think something is "good" or "good enough", even if they don't fully trust it, I guess. But we've barely scratched the surface here; lots of follow-up questions to be asked.
07.10.2025 09:04 — 👍 0 🔁 0 💬 1 📌 0
As for your point on use: Well, the other features still work so seeing an AI-generated overview that you don't think is great will not deter you from using the rest of Google Search (apart from the fact that it's baked into so much infrastructure, so convenience argument applies, too).
07.10.2025 09:04 — 👍 0 🔁 0 💬 1 📌 0
Thanks so much, Andy! I think seeing relatives use it (but also seeing myself use it) helped anchor my expectations here. "We (you and me)" are less trusting because we study this stuff all day long. And these answers can be correct and people can see that, so there's good reason to trust them some of the time.
07.10.2025 09:04 — 👍 0 🔁 0 💬 1 📌 0
Thanks, Hannes!
07.10.2025 08:49 — 👍 1 🔁 0 💬 1 📌 0
Thank you, David :-) @richardfletcher.bsky.social will be able to confirm, but I'm afraid the answer is "No" given the way we asked. Given the primacy of the big systems, one could reasonably speculate it will mostly be one of them, however.
(And yes, read the report ;-)
07.10.2025 08:48 — 👍 1 🔁 0 💬 0 📌 0
I am also thankful to Masaharu Ban, Gretel Kahn, Priscille Biehlmann, Magnus Bredsdorff, and Tania Montalvo for their advice on the translations & @mitalilive.bsky.social, Kate Hanneford-Smith, Alex Reid, @eduardosuarez.bsky.social, Rebecca Edwards for helping to move this project forward.
07.10.2025 07:01 — 👍 1 🔁 0 💬 0 📌 0
@michelledisser.bsky.social helped with red-teaming the questionnaire and the interpretation of some of the results.
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
A big thank you to my co-authors @richardfletcher.bsky.social & @rasmuskleis.bsky.social for all their hard work. Caryhs Innes, Xhoana Beqiri, and the team at @yougov.co.uk were invaluable in fielding the survey and the @reutersinstitute.bsky.social research team helped with approaching the topic.
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
So much for now. We will be diving deeper into this across our newsletter and I will try to highlight some specific areas. All of the above and more can be found in the full 60-page report.
Here is the link again: reutersinstitute.politics.ox.ac.uk/generative-a...
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
Likewise, and perhaps again not popular news for many outlets, seeing AI labelling on news is infrequent relative to daily news use. Only 19% see AI labels daily and 28% weekly – a low number considering that 77% say they use news daily.
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
And despite a growing number of outlets introducing audience-facing AI, most people don’t yet recall seeing these AI features. 60% say they do not regularly see AI features on news sites or apps. Most common are AI summaries (19%) and AI chatbots (16%).
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
People have limited confidence in human oversight of AI in news. Only 33% think journalists ‘always’ or ‘often’ check AI outputs before publishing. Trust in news strongly correlates: 57% of those who ‘strongly trust’ news think such checks happen, vs. 19% of those who strongly distrust it.
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
… but there is also the view that AI will make news less transparent (−8) and less trustworthy (−19); and these views have hardened since 2024 (we saw no decreases).
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
People continue to have mixed expectations about what AI will do to news. As with last year, many assume AI will make news cheaper to produce (+39 percentage-point difference between those who said more and those who said less) and more up to date (+22)…
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
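For anyone wanting to reproduce the arithmetic behind scores like "+39": it is the share of respondents saying "more" minus the share saying "less". A minimal sketch, with hypothetical response counts (not the report's actual data):

```python
# Net percentage-point score from hypothetical survey tallies.
# Score = % saying "more" minus % saying "less" (don't knows count in the base).

def net_score(more: int, less: int, total: int) -> int:
    """Return the net percentage-point difference, rounded to whole points."""
    return round(100 * more / total) - round(100 * less / total)

# Hypothetical tallies for "Will AI make news cheaper to produce?"
responses = {"more": 520, "less": 130, "no_change": 250, "dont_know": 100}
total = sum(responses.values())  # 1000 respondents

print(net_score(responses["more"], responses["less"], total))  # 52 - 13 = +39
```

Note that negative scores (like the −19 for trustworthiness) simply mean the pessimists outnumber the optimists on that item.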
Mind you, this is self-reported and people might not always know that AI has been used in something they are consuming, but it shows that the public is sceptical of its use in news (though, as the report shows, it depends on the context).
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
Finally, looking at AI in the news, there is a clear ‘comfort gap’ between AI- and human-led news. On average, only 12% say they are comfortable with news made entirely by AI; (21% with a ‘human in the loop’, 43% when a human leads, 62% for entirely human-made news).
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
Only three sectors see the pessimists outnumber the optimists when it comes to AI and its use by these actors – news media, government, and, especially, politicians and political parties…
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
On AI in society findings, Rasmus will have more to say on this but generally, there are more optimists than pessimists when it comes to AI – especially for AI use in sectors like healthcare, science, and search engines (!).
07.10.2025 07:01 — 👍 0 🔁 0 💬 1 📌 0
Among those who have encountered AI answers, 50% say they trust them. Respondents emphasised their speed and convenience and the fact that AI aggregates vast amounts of information as reasons to trust them, although trust seems conditional.
07.10.2025 07:01 — 👍 1 🔁 1 💬 1 📌 1
Self-reported click-through behaviour is mixed. Among those who saw AI answers, 33% say they always/often click links in AI overviews, 37% say they do so sometimes, and 28% rarely or never click through. Younger people & those who trust AI search are more likely to say they do so.
07.10.2025 07:00 — 👍 0 🔁 0 💬 1 📌 0