Very cool! The platform works really well.
Deepfake pornography isn’t going away just because we are passing laws and taking down a couple of big websites.
Our new pre-print, led by @aedcv.bsky.social, suggests that sharing of this material continued to thrive even after platform and policy shocks.
arxiv.org/abs/2602.02754
We are looking for a doctoral researcher to work with us on a super cool project in collaboration with linguists. The deadline is Feb 15th; contact me if you have any questions!
stellen.uni-konstanz.de/jobposting/9...
Do reasoning models have real “Aha!” moments—mid-chain realizations where they intrinsically self-correct?
In a new pre-print, “The Illusion of Insight in Reasoning Models,” led by @liv-daliberti.bsky.social, we provide strong evidence that they do not!
📜: arxiv.org/abs/2601.00514
We rely on benchmarks to answer questions they weren’t designed to ask. This post thoughtfully explores the "empiricism gap" in ML/CS, and what social-science methods can offer.
A great read for both CS and social sciences folks.
Congrats Anna!! 🥳
🌱✨ Life update: I just started my PhD at Princeton University!
I will be supervised by @manoelhortaribeiro.bsky.social and affiliated with Princeton CITP.
It's only been a month, but the energy feels amazing; very grateful for such a welcoming community. Excited for what’s ahead! 🚀
Social media feeds today are optimized for engagement, often leading to misalignment between users' intentions and technology use.
In a new paper, we introduce Bonsai, a tool to create feeds based on stated preferences, rather than predicted engagement.
arxiv.org/abs/2509.10776
✍️ I wrote a short piece for the #SPSPblog about our work on AI persuasion (w/ @manoelhortaribeiro.bsky.social @ricgallotti.bsky.social Robert West).
Read it at: t.co/MipJKWbb1h.
Thanks @andyluttrell.bsky.social @prpietromonaco.bsky.social @spspnews.bsky.social for your invitation and feedback!
🚨YouTube is a key source of health info, but it’s also rife with dangerous myths on opioid use disorder (OUD), a leading cause of death in the U.S.
To understand the scale of such misinformation, our #EMNLP2025 paper introduces MythTriage, a scalable system to detect OUD myths 🧵
EPFL, ETH Zurich & CSCS just released Apertus, Switzerland’s first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it’s built for transparency, responsibility & the public good.
Read more: actu.epfl.ch/news/apertus...
Another paper showing AI (Claude 3.5) is more persuasive than the average human, even when the humans had financial incentives.
In this case, either AI or humans (paid if they were persuasive) tried to convince quiz takers (paid for accuracy) to pick either right or wrong answers on a quiz.
📣 Super excited to organize the first workshop on ✨NLP for Democracy✨ at COLM @colmweb.org!!
Check out our website: sites.google.com/andrew.cmu.e...
Call for submissions (extended abstracts) due June 19, 11:59pm AoE
#COLM2025 #LLMs #NLP #NLProc #ComputationalSocialScience
A study in Nature Human Behaviour finds that large language models (LLMs), such as GPT-4, can be more persuasive than humans 64% of the time in online debates when adapting their arguments based on personalised information about their opponents. go.nature.com/4j9ibyE 🧪
Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language models might do a better job. The finding suggests that AI could become a powerful tool for persuading people, for better or worse.
I also have another preprint out showing similar results on Claude Sonnet 3.5 in interactive quizzes with highly incentivised humans, both in truthful and deceptive persuasion. More on this at: arxiv.org/abs/2505.09662
If you're interested in knowing more, you can find a more detailed breakdown on our methodology and results at: x.com/fraslv/statu...
Or read the full paper at nature.com/articles/s41...
Thanks to my amazing coauthors @manoelhortaribeiro.bsky.social @ricgallotti.bsky.social Robert West
That raises urgent questions about possible misuse in political propaganda, misinformation, and election interference.
Platforms and regulators should seriously consider these risks and step up in our discussion about guardrails, transparency, and accountability.
📢📜 Excited to share that our paper "On the conversational persuasiveness of GPT-4" has been published in Nature Human Behaviour!
🤖 Key takeaway: LLMs can already reach superhuman persuasiveness, especially when given access to personalized information
www.nature.com/articles/s41...
“Obviously as soon as people see that you can persuade people more with LLMs, they’re going to start using them. I find it both fascinating and terrifying,” says @frasalvi.bsky.social
Read more on persuasive chatbots in my rather terrifying piece for @nature.com 🧪
www.nature.com/articles/d41...
If your NSF grant has been terminated, please, please report it here:
airtable.com/appGKlSVeXni...
Collecting this information is supremely helpful to organize and facilitate a response.
I am recruiting 2 PhD students for Fall'25 @csaudk.bsky.social to work on bleeding-edge topics in #NLProc #LLMs #AIAgents (e.g. LLM reasoning, knowledge-seeking agents, and more).
Details: www.cs.au.dk/~clan/openings
Deadline: May 1, 2025
Please boost!
cc: @aicentre.dk @wikiresearch.bsky.social
🚨 #IC2S2’25 Call for Abstracts deadline is just around the corner: Feb 24, 2025
Submit your abstract now: www.ic2s2-2025.org/submit-abstr... and join us in Norrköping, Sweden.
Tutorials announcement coming soon!
New tool to estimate the level of participation in collective action expressed in natural language.
Applied to social media, it can produce large-scale, granular estimates of behavior change with respect to collective action.
github.com/ariannap13/e...
@nerdsitu.bsky.social @itu.dk @carlsbergfondet.dk
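To give a feel for the kind of output such a tool produces, here is a toy sketch: a keyword-based scorer that maps posts to coarse "participation level" labels and tallies them over a corpus. Everything here (the level names, the cue lists) is hypothetical and purely illustrative; the actual tool in the linked repo uses a trained language model, not keywords.

```python
from typing import List

# Hypothetical, coarse levels of expressed participation in collective action.
LEVELS = ["none", "problem-awareness", "intention", "action"]

# Toy cue lists (illustrative only; the real tool is a trained model).
CUES = {
    "action": ["we protested", "i marched", "i signed", "i donated"],
    "intention": ["i will join", "planning to attend", "count me in"],
    "problem-awareness": ["this is unjust", "we must change", "crisis"],
}

def estimate_level(post: str) -> str:
    """Return the highest matching participation level for one post."""
    text = post.lower()
    for level in ("action", "intention", "problem-awareness"):
        if any(cue in text for cue in CUES[level]):
            return level
    return "none"

def aggregate(posts: List[str]) -> dict:
    """Large-scale use: tally estimated levels over a corpus of posts."""
    counts = {level: 0 for level in LEVELS}
    for post in posts:
        counts[estimate_level(post)] += 1
    return counts
```

Aggregating per-post labels like this is what enables the granular, population-level estimates of behavior change the post describes.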
Just arrived in Trento for cs2italy.org, the first Italian conference on CSS. Excited to see the Italian community coming together!
🤖 I'll be presenting our work on AI persuasion [1] tomorrow morning at 11:15 in session 1A. Come say hello!
[1] arxiv.org/abs/2403.14380
How effective are LLMs at persuading and deceiving people? In a new preprint, we review the theoretical risks of LLM persuasion, empirical work measuring how persuasive LLMs currently are, and proposals to mitigate these risks. 🧵
arxiv.org/abs/2412.17128
Before you all delete your accounts on X, you should consider deleting your content but "donating" the accounts to science. Many institutions, such as @gesis-dataservices.bsky.social, might use them to scrape more effectively than via burner accounts.
What do Samy Bengio, Michael Bronstein (@mmbronstein.bsky.social), and Annie Hartley have in common, apart from being brilliant scientists?
They are now professors at EPFL. Welcome!!! 🤗🚀
actu.epfl.ch/news/appoint...
GPT-4 can pass, on average, 91.7% of EPFL core courses, raising significant concerns about the vulnerability of higher education to AI assistants.
Timely large-scale study mobilising an army of scholars across EPFL, including my small contribution to the evaluation efforts ✍️
More below ⬇️