
Sandra Wachter

@swachter.bsky.social

Professor of Technology and Regulation, Oxford Internet Institute, University of Oxford https://tinyurl.com/3rkmbmsf | Humboldt Professor of Technology & Regulation, Hasso Plattner Institute https://tinyurl.com/47rkrt6c | Governance of Emerging Technologies

2,533 Followers  |  174 Following  |  78 Posts  |  Joined: 13.11.2024

Latest posts by swachter.bsky.social on Bluesky

My interview @newsweek.com with Marni Rose McFall

@oii.ox.ac.uk @socsci.ox.ac.uk @oxfordlawfac.bsky.social

01.12.2025 09:48 — 👍 2    🔁 1    💬 0    📌 0

Two versions of same photo spark online alarm: 'It is so over' Google's new Nano Banana Pro is making waves online, and people are declaring the end of traditional photography.

I find it dystopian to claim the era of traditional photography is “over.” It basically says “why capture real genuine human moments if you could just generate them on your computer?” I am not sure why we should celebrate the death of art & artists & see this as a sign of progress tinyurl.com/2s4e6ede

01.12.2025 09:46 — 👍 5    🔁 0    💬 0    📌 1

Thanks @newsweek.com Hugh Cameron for featuring my work w/
@bmittelstadt.bsky.social & @cruss.bsky.social
on GenAI, hallucinations & “careless speech” www.newsweek.com/google-boss-...

21.11.2025 08:46 — 👍 2    🔁 1    💬 0    📌 0

GenAI is extremely prone to hallucinate. If people use LLMs to ask questions they don't know the answer to, how would they be able to spot a mistake? And now think what can happen if we implement these systems in areas where truth and detail matter, e.g. investment decisions, the stock market, or medicine.

21.11.2025 08:41 — 👍 8    🔁 4    💬 1    📌 1
Keynote: The Power of AI: From Tech Ideologies to New Authoritarianisms. 20.11.2025, 18:30-20:00 h, Weizenbaum Institute, Hardenbergstraße 32, 10623 Berlin

โš ๏ธ AI is seen by many as a promise of salvation for humanity. But what if this technology of the future hides authoritarian fantasies of power behind its shiny facade? Join this keynote by Rainer Mรผhlhoff (Uni Osnabrรผck) ๐Ÿ‘‰ Register now: buff.ly/DNNizzn

04.11.2025 15:01 — 👍 24    🔁 8    💬 2    📌 1

Trump Administration Providing Weapons Grade Plutonium to Sam Altman The White House is providing plutonium to Sam Altman's Oklo, one of four US companies chosen to test experimental reactor designs.

HT @cruss.bsky.social

futurism.com/science-ener...

31.10.2025 08:45 — 👍 3    🔁 0    💬 0    📌 1

Full papers here w @bmittelstadt.bsky.social @cruss.bsky.social @oii.ox.ac.uk @socsci.ox.ac.uk

Do large language models have a legal duty to tell the truth? lnkd.in/ejqW4nnB

To protect science, we must use LLMs as zero-shot translators lnkd.in/etZRbSh5

27.10.2025 09:40 — 👍 2    🔁 1    💬 1    📌 0

The human line: safeguarding rights and democracy in the AI era Strasbourg 20/10/2025

Super excited to see my work on GenAI & subtle hallucinations, or what we coin “careless speech”, cited in the @coe.int report. We need to think about the cumulative, long-term risks of “careless speech” to science, education, media & shared social truth in democratic societies.

tinyurl.com/44pjvrjp

27.10.2025 09:39 — 👍 4    🔁 0    💬 1    📌 0

So that's teaching wrapped up for another year; next class starts at the beginning of March. Marking, then writing, then Kenya, then writing... Will the bubble burst while I am drafting?

23.10.2025 08:05 — 👍 2    🔁 1    💬 2    📌 0
Pregnant women describe miscarrying and bleeding out while in ICE custody, advocates say The ACLU and other groups are pressing for ICE to identify and release all pregnant women in custody and to stop detaining anyone known to be pregnant, postpartum or nursing.

“Medical staff did not give her any food, water, or pain medication for several hours. Much later that evening, after a significant loss of blood, Lucia was transported to an emergency room approximately an hour away, with her arms and legs shackled.” www.nbcnews.com/news/us-news...

23.10.2025 02:39 — 👍 2284    🔁 1643    💬 157    📌 191
Scientists Completed a Toxicity Report on This Forever Chemical. The EPA Hasn't Released It. Agency scientists found that PFNA could cause developmental, liver and reproductive harms. Their final report was ready in mid-April, according to an internal document reviewed by ProPublica, but the ...

EPA scientists linked PFNA with developmental, liver and reproductive harms.

Their final report was ready in mid-April, according to an internal document reviewed by ProPublica, but it has yet to be released by the Trump administration.

By @fastlerner.bsky.social

22.10.2025 16:26 — 👍 317    🔁 159    💬 9    📌 15
Scientists Increasingly Using AI to Help Write Papersโ€”for Better or Worse

Super excited to see my work on the dangers of “careless speech”, subtle hallucinations & GenAI for science, academia & education or any areas where truth & detail matter w/ @bmittelstadt.bsky.social @cruss.bsky.social ft @elsevierconnect.bsky.social Mitch Leslie
tinyurl.com/4ms2mkub

22.10.2025 06:26 — 👍 8    🔁 6    💬 2    📌 1

@oii.ox.ac.uk @socsci.ox.ac.uk @oxfordlawfac.bsky.social

22.10.2025 06:27 — 👍 2    🔁 0    💬 0    📌 0
Preview
Do large language models have a legal duty to tell the truth? | Royal Society Open Science Careless speech is a new type of harm created by large language models (LLM) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce ...

Do large language models have a legal duty to tell the truth? royalsocietypublishing.org/doi/10.1098/...
To protect science, we must use LLMs as zero-shot translators www.nature.com/articles/s41...

22.10.2025 06:27 — 👍 2    🔁 0    💬 0    📌 0
Why AI could make people more likely to lie A new study has revealed that people feel much more comfortable being deceitful when using AI

Prof @swachter.bsky.social @oii.ox.ac.uk comments about the worrying consequences that can arise if people are more likely to engage in unethical behaviour when using AI.

Read @the-independent.com article: ⬇️

www.independent.co.uk/news/uk/home...

16.10.2025 15:22 — 👍 4    🔁 4    💬 0    📌 0
Bank of England Warns of Impending AI Disaster The Bank of England has sounded the alarm, warning of an intensifying risk of a "sudden correction" due to an AI spending frenzy.

“Concerns over an AI bubble bursting have grown lately, with analysts recently finding that it's 17 times the size of the dotcom-era bubble and four times bigger than the 2008 financial crisis.”

Hang onto your butts. This “correction” is gonna hurt.
futurism.com/artificial-i...

10.10.2025 03:45 — 👍 1053    🔁 456    💬 42    📌 226

interesting work!

20.09.2025 05:37 — 👍 3    🔁 0    💬 0    📌 0
Why AI could make people more likely to lie A new study has revealed that people feel much more comfortable being deceitful when using AI

Why AI could make people more likely to lie

Coverage of our recent paper by The Independent, with nice commentary by @swachter.bsky.social

www.independent.co.uk/news/uk/home...

18.09.2025 16:38 — 👍 9    🔁 8    💬 0    📌 1
Do large language models have a legal duty to tell the truth? | Royal Society Open Science Careless speech is a new type of harm created by large language models (LLM) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce ...

full paper here: royalsocietypublishing.org/doi/10.1098/...

15.09.2025 05:07 — 👍 2    🔁 2    💬 0    📌 0
How chatbots are changing the internet As artificial and human intelligence becomes harder to tell apart, do we need new rules of engagement?

LLMs produce responses that are plausible but contain factual inaccuracies. It's time for accountability! Precedents have been established that companies are liable for the answers they provide, e.g. the 2013 German Google case. Thx @financialtimes.com @johnthornhill.bsky.social for featuring my work on.ft.com/46deNjr

15.09.2025 05:06 — 👍 13    🔁 5    💬 1    📌 1
AI models are struggling to identify hate speech, study finds A new study has found that AI content moderators are evaluating statements of hate speech differently, which is a “critical issue for the public”, according to the researcher

It is unsurprising to me that models have different results, but it doesn't make the harm go away.

GenAI is a popular tool for people to inform themselves; tech companies have a responsibility to ensure that their content is not harmful. With big tech comes big responsibility tinyurl.com/3zwwnr7y

17.09.2025 05:04 — 👍 2    🔁 2    💬 0    📌 1

Thanks to the Independent & Harriette Boucher for including me, @oii.ox.ac.uk @socsci.ox.ac.uk

17.09.2025 05:06 — 👍 2    🔁 1    💬 0    📌 0
Recruitment

I'm hiring again! Please share. I'm recruiting a postdoc research fellow in human-centred AI for scalable decision support. Join us to investigate how to balance scalability and human control in medical decision support. Closing date: 4 October (AEST).
uqtmiller.github.io/recruitment/

16.09.2025 04:34 — 👍 2    🔁 7    💬 1    📌 1

New! Prof @swachter.bsky.social, @oii.ox.ac.uk explains how AI chatbots don't always speak the truth and why we all need to be more vigilant in distinguishing fact from fiction. Read the full @financialtimes.com article by @johnthornhill.bsky.social: bit.ly/47KQhHA.

15.09.2025 15:18 — 👍 7    🔁 2    💬 0    📌 0

@oii.ox.ac.uk @bmittelstadt.bsky.social @cruss.bsky.social

15.09.2025 05:07 — 👍 0    🔁 0    💬 0    📌 0
