@swachter.bsky.social
Professor of Technology and Regulation, Oxford Internet Institute, University of Oxford https://tinyurl.com/3rkmbmsf | Humboldt Professor of Technology & Regulation, Hasso Plattner Institute https://tinyurl.com/47rkrt6c | Governance of Emerging Technologies
My interview @newsweek.com with Marni Rose McFall
@oii.ox.ac.uk @socsci.ox.ac.uk @oxfordlawfac.bsky.social
I find it dystopian to claim the era of traditional photography is "over." It basically says "why capture real genuine human moments if you could just generate them on your computer?" I am not sure why we should celebrate the death of art & artists & see this as a sign of progress tinyurl.com/2s4e6ede
01.12.2025 09:46
Thanks @newsweek.com Hugh Cameron for featuring my work w/
@bmittelstadt.bsky.social & @cruss.bsky.social
on GenAI, hallucinations & "careless speech" www.newsweek.com/google-boss-...
GenAI is extremely prone to hallucinate. If people use LLMs to ask questions they don't know the answer to, how would they be able to spot a mistake? And now think what can happen if we implement these systems in areas where truth and detail matter, e.g. investment decisions, the stock market or medicine
21.11.2025 08:41
Keynote: The Power of AI: From Tech Ideologies to New Authoritarianisms. 20.11.2025, 18:30-20:00 h, Weizenbaum Institute, Hardenbergstraße 32, 10623 Berlin
AI is seen by many as a promise of salvation for humanity. But what if this technology of the future hides authoritarian fantasies of power behind its shiny facade? Join this keynote by Rainer Mühlhoff (Uni Osnabrück). Register now: buff.ly/DNNizzn
04.11.2025 15:01
HT @cruss.bsky.social
futurism.com/science-ener...
Full papers here w/ @bmittelstadt.bsky.social @cruss.bsky.social @oii.ox.ac.uk @socsci.ox.ac.uk
Do large language models have a legal duty to tell the truth? lnkd.in/ejqW4nnB
To protect science, we must use LLMs as zero-shot translators lnkd.in/etZRbSh5
Super excited to see my work on GenAI & subtle hallucinations, or what we have coined "careless speech", cited in the @coe.int report. We need to think about the cumulative, long-term risks of "careless speech" to science, education, media & shared social truth in democratic societies.
tinyurl.com/44pjvrjp
So that's teaching wrapped up for another year; next class at the start of March. Marking then writing then Kenya then writing... Will the bubble burst while I am drafting?
23.10.2025 08:05
"Medical staff did not give her any food, water, or pain medication for several hours. Much later that evening, after a significant loss of blood, Lucia was transported to an emergency room approximately an hour away, with her arms and legs shackled." www.nbcnews.com/news/us-news...
23.10.2025 02:39
EPA scientists linked PFNA with developmental, liver and reproductive harms.
Their final report was ready in mid-April, according to an internal document reviewed by ProPublica, but it has yet to be released by the Trump administration.
By @fastlerner.bsky.social
Super excited to see my work on the dangers of "careless speech", subtle hallucinations & GenAI for science, academia & education or any areas where truth & detail matter w/ @bmittelstadt.bsky.social @cruss.bsky.social, featured by @elsevierconnect.bsky.social Mitch Leslie
tinyurl.com/4ms2mkub
@oii.ox.ac.uk @socsci.ox.ac.uk @oxfordlawfac.bsky.social
22.10.2025 06:27
Do large language models have a legal duty to tell the truth? royalsocietypublishing.org/doi/10.1098/...
To protect science, we must use LLMs as zero-shot translators www.nature.com/articles/s41...
Prof @swachter.bsky.social @oii.ox.ac.uk comments on the worrying consequences that can arise if people become more likely to engage in unethical behaviour when using AI.
Read the @the-independent.com article:
www.independent.co.uk/news/uk/home...
"Concerns over an AI bubble bursting have grown lately, with analysts recently finding that it's 17 times the size of the dotcom-era bubble and four times bigger than the 2008 financial crisis."
Hang onto your butts. This "correction" is gonna hurt.
futurism.com/artificial-i...
Interesting work!
20.09.2025 05:37
Why AI could make people more likely to lie
Coverage of our recent paper by The Independent, with nice commentary by @swachter.bsky.social
www.independent.co.uk/news/uk/home...
full paper here: royalsocietypublishing.org/doi/10.1098/...
15.09.2025 05:07
LLMs produce responses that are plausible but that contain factual inaccuracies. It's time for accountability! Precedents have been established that companies are liable for the answers they provide, e.g. the 2013 German Google case. Thanks @financialtimes.com @johnthornhill.bsky.social for featuring my work: on.ft.com/46deNjr
15.09.2025 05:06
It is unsurprising to me that models have different results, but it doesn't make the harm go away.
GenAI is a popular tool for people to inform themselves; tech companies have a responsibility to ensure that their content is not harmful. With big tech comes big responsibility. tinyurl.com/3zwwnr7y
Thanks to the Independent & Harriette Boucher for including me, @oii.ox.ac.uk @socsci.ox.ac.uk
17.09.2025 05:06
I'm hiring again! Please share. I'm recruiting a postdoc research fellow in human-centred AI for scalable decision support. Join us to investigate how to balance scalability and human control in medical decision support. Closing date: 4 October (AEST).
uqtmiller.github.io/recruitment/
New! Prof @swachter.bsky.social, @oii.ox.ac.uk explains how AI chatbots don't always speak the truth and why we all need to be more vigilant in distinguishing fact from fiction. Read the full @financialtimes.com article by @johnthornhill.bsky.social: bit.ly/47KQhHA.
15.09.2025 15:18
@oii.ox.ac.uk @bmittelstadt.bsky.social @cruss.bsky.social
15.09.2025 05:07