Promising Topics for US–China Dialogues on AI Risks and Governance | Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
At FAccT today? Hear from @oii.ox.ac.uk DPhil student @lujain.bsky.social presenting her co-authored research paper ‘Promising Topics for U.S.–China Dialogues on AI Risks and Governance’ in the AI Regulation session, 11.09am today. #FAccT2025
Read the paper:
dl.acm.org/doi/10.1145/...
25.06.2025 09:17 — 👍 4 🔁 2 💬 0 📌 0
Towards Interactive Evaluations for Interaction Harms in Human-AI Systems
In the latest essay in our AI & Democratic Freedoms series, @lujain.bsky.social, @saffron.bsky.social, @umangsbhatt.bsky.social, Lama Ahmad, and Markus Anderljung propose a new AI evaluation paradigm that assesses the harms that can emerge from repeated human-AI interactions.
23.06.2025 17:51 — 👍 6 🔁 3 💬 0 📌 0
Dear ChatGPT, Am I the Asshole?
While Reddit users might say yes, your favorite LLM probably won’t.
We present Social Sycophancy: a new way to understand and measure sycophancy as the degree to which LLMs overly preserve users' self-image.
21.05.2025 16:51 — 👍 134 🔁 31 💬 6 📌 3
On 4/10 & 4/11, we're hosting our symposium "AI and Democratic Freedoms." Thrilled to have @ghadfield.bsky.social, @lujain.bsky.social, @sydneylevine.bsky.social, and Hoda Heidari, with moderator @hlntnr.bsky.social, for our third panel. RSVP: www.eventbrite.com/e/artificial...
04.04.2025 13:59 — 👍 8 🔁 2 💬 1 📌 1
📈Out today in @PNASNews!📈
In a large pre-registered experiment (n=25,982), we find evidence that scaling the size of LLMs yields sharply diminishing persuasive returns for static political messages.
🧵:
07.03.2025 18:28 — 👍 39 🔁 20 💬 1 📌 3
Artificial Intelligence and Democratic Freedoms
EVENT: Join us for Artificial Intelligence and Democratic Freedoms on April 10-11 at
@columbiauniversity.bsky.social & online. Hosted with Senior AI Advisor @sethlazar.org. Co-sponsored by the Knight Institute & @columbiaseas.bsky.social. Panel info in 🧵. RSVP: knightcolumbia.org/events/artif...
07.03.2025 15:31 — 👍 14 🔁 7 💬 1 📌 2
Panel 3: Eval. & Design of Safe AI. 2:15pm, 4/10. @ghadfield.bsky.social (@hopkinsengineer.bsky.social), Hoda Heidari (@carnegiemellon.bsky.social), @lujain.bsky.social (@ox.ac.uk), @sydneylevine.bsky.social (Allen Institute for AI), & @hlntnr.bsky.social (Cntr for Security & Emerging Technology).
07.03.2025 15:31 — 👍 4 🔁 1 💬 1 📌 0
Congratulations to @oii.ox.ac.uk DPhil student @lujain.bsky.social, co-author of a new pre-print that considers a new method for evaluating LLMs. Thanks for sharing, @agstrait.bsky.social!
25.02.2025 16:55 — 👍 4 🔁 2 💬 0 📌 0
Thanks for reading & sharing, Andrew!
15.02.2025 17:15 — 👍 2 🔁 0 💬 0 📌 0
@lujain.bsky.social has published an excellent new paper exploring anthropomorphic behaviours in LLMs. Notable finding: the majority of these behaviours occur after multi-turn interactions.
arxiv.org/abs/2502.07077
13.02.2025 14:35 — 👍 12 🔁 4 💬 1 📌 1
Recruiting 2x three-year postdoctoral researchers and 1-2 PhD students – Synthetic Society
The Synthetic Society research team at the Oxford Internet Institute invites applications from enthusiastic and motivated candidates for two postdoctoral positions and 1-2 PhD positions in 2025, worki...
🚨 I'm recruiting 2x postdocs and 1-2 DPhil (PhD) students at Oxford to work on AI, Privacy-Enhancing Technologies, and public interest technology research.
Interested in human-centred and critical approaches to study the impact of data and algorithms on society? Join us next year!
17.12.2024 11:23 — 👍 16 🔁 16 💬 1 📌 2
👋 Join us on 14 March
🚀 Kick off your weekend with a curated dose of information, upcoming events, and must-reads in the latest issue of our newsletter.
🔎 sh1.sendinblue.com/3g7uih7xzlxp...
✍🏼 checkfirst.network/newsletter/
-- @lujain.bsky.social @alia.bsky.social @dscheykopp.bsky.social @ulrikeklinger.bsky.social
01.03.2024 15:20 — 👍 1 🔁 1 💬 1 📌 0
Thanks for sharing!
01.03.2024 15:35 — 👍 1 🔁 0 💬 0 📌 0
Moderating Model Marketplaces
RSM welcomes Robert Gorwa and Michael Veale for a discussion of their research into the governance questions raised by the moderation of model marketplaces.
Want to hear Rob Gorwa and me talk about what content moderation of uploaded AI models on platforms like Hugging Face, GitHub, and Civitai tells us about the future of who analyses open-source dual-use models? Mar 6, 12:30 ET, online, organised by Berkman Klein. Register: cyber.harvard.edu/events/moder...
27.01.2024 13:04 — 👍 13 🔁 7 💬 0 📌 0
@tabisamra.bsky.social is here! And @maitha.bsky.social! And @paularambles.bsky.social! where are my other ny*ad skeeps? who am i missing?
28.08.2023 15:06 — 👍 5 🔁 1 💬 3 📌 0
Omg this is amazing!! All my faves 💫❤️
28.08.2023 21:10 — 👍 2 🔁 0 💬 0 📌 0
I love this
20.08.2023 10:06 — 👍 0 🔁 0 💬 1 📌 0
A watercolor of a rabbit holding a spear and riding atop a giant snail with a saddle and reins
when i say i'm omw, this is what i mean
12.08.2023 16:48 — 👍 1288 🔁 312 💬 12 📌 12
Such a good thread!
12.08.2023 20:19 — 👍 1 🔁 1 💬 0 📌 0
Screen grab from @lightintheatticrecords on IG: A boxing instructional illustration in four parts. Left: DON’T TALK TO ME; right: UNTIL I’VE HAD MY; upper cut with the right: AMBIENT; hard left for the win: MUSIC.
Logging on
06.08.2023 14:25 — 👍 20 🔁 5 💬 2 📌 0
Amazing!! Welcome 🥳
04.08.2023 21:25 — 👍 1 🔁 0 💬 0 📌 0
a little something to take the edge off
04.08.2023 19:24 — 👍 353 🔁 89 💬 11 📌 9
designers of bluesky! we're hiring a short-term designer for a mozilla-funded simple educational game(ish) / interactive explainer on social media algorithmic feeds. email me aae322@nyu.edu for more info!
24.07.2023 13:43 — 👍 5 🔁 3 💬 2 📌 0
My least favorite place on earth is Heathrow
04.07.2023 14:22 — 👍 0 🔁 0 💬 0 📌 0
dead inside and cherishing my little plants
03.06.2023 14:28 — 👍 560 🔁 114 💬 15 📌 0
🥺this skeet made my day. I miss the Records days💔 Can't wait to show you what we're making!!
02.06.2023 19:39 — 👍 1 🔁 0 💬 1 📌 0
how shall we live together?
societal impacts researcher at Anthropic
saffronhuang.com
Policy Director, Knight First Amendment Institute at Columbia University | Senior Non-Resident Fellow, Center for International Policy. Opinions mine.
Director, Knight First Amendment Institute at Columbia University; Exec Editor, Just Security; former ACLU. knightcolumbia.org.
PhD student at the Oxford Internet Institute, researching data access solutions for public interest researchers.
data access · AI auditing, fairness and accountability · human-centered data science
Researching the bad, building the good.
Research & creative studio dedicated to critically demystifying emerging tech for broad audiences. decifer.tech
🇿🇦 | Fairness in AI | University of Oxford | Deep Learning Indaba | Internships: Google DeepMind || Microsoft Research
Lecturer in AI, Government & Policy at the @oii.ox.ac.uk | Investigating the environmental impacts of AI from silicon to e-waste | Associate Editor at Big Data & Society | Fellow at UCL IAS
https://www.oii.ox.ac.uk/people/profiles/ana-valdivia
asst. prof. @uwsjmc.bsky.social rossdahlke.com
Professor of Data Ethics and Policy & Director of Research at the Oxford Internet Institute, University of Oxford. Founder of the Governance of Emerging Technologies research programme.
Assistant Professor @uwmadison.bsky.social
Professor of Human Behaviour and Technology at the University of Oxford ▦ Father ▦ Gamer ▦ Psychologist ▦
Head of Trust Framework, OfDIA/DSIT
Developed the first liberal political philosophy of British digital identity systems (for a DPhil at @oiioxford.bsky.social)
chsmith.co.uk
Research Fellow in AI and News, Reuters Institute, Oxford University | Research Associate & DPhil, Oxford Internet Institute | AI, news, misinfo, tech, democracy | Affiliate Tow Center, CITAP | Media advisor | My views etc…
https://www.felixsimon.net/
Prof @Oxford, Affiliate @MIT #SocialMedia #Misinformation #Polarization www.MohsenMosleh.com
We pioneered algorithm analysis and deploy expert skills to probe the digital space and its actors.
https://checkfirst.network
PhD researcher at University of Oxford working on digitalisation, party competition, public opinion, and elections. She/her. (@politicsoxford.bsky.social, previously @oii.ox.ac.uk)
President of Signal, Chief Advisor to AI Now Institute
The AI Now Institute produces diagnosis and actionable policy research on artificial intelligence.
Find us at https://ainowinstitute.org/
AI policy @mozilla | grad student @oiioxford | proud @ameja member | prev @googlenewsinit @googletrends