Don’t hate the player, hate the tools: AI in US Political Campaigning Edition
I always read papers published by this author, and so should you (I think). His latest is just out in Political Communication (buff.ly/dL5KVHx), with colleagues Adrian Rauchfleisch & Alexander Wuttke.
Thank you, @felixsimon.bsky.social. Great to hear that the paper is useful!
The integration of AI in campaign operations and voter outreach is evolving rapidly and will become a core concern within the conduct, regulation, and study of campaigns worldwide. There remains much to do. We will be back. 13/
And there are many other sources that surface and discuss different uses and experiences with AI (see
@katieharbath.bsky.social, @msifry.bsky.social, aiandelections.substack.com, Higher Ground Labs). 12/
We are starting to see strong work identifying how and to what effect AI is used in campaigns. See work by @florianfoos.bsky.social, @giuliasandri.bsky.social, @meinungsfuehrer.bsky.social, @pjost.bsky.social, and @profkatedommett.bsky.social. 11/
We fielded our surveys in early 2024. Since then, much has happened. Both public awareness of AI and its integration into everyday campaign practice have accelerated rapidly. As usual, academic accounts are only beginning to catch up. 10/
For campaigners and regulators, the findings suggest that deceptive AI use may be electorally low-risk but systemically costly, accelerating demand for blunt regulation, while more mundane AI uses face far less public resistance. 9/
This shows how campaign practices can function as exemplars, shaping public attitudes toward AI governance far beyond elections. 8/
Importantly, the consequences of deceptive AI use emerge elsewhere. Information about AI deception increases feelings of lost control and support for restrictive AI regulation, including calls for halting AI development more broadly. 7/
This shows a misalignment between public norms and electoral incentives, likely driven by motivated reasoning and polarization. The study thus speaks directly to classic debates in political communication about norm enforcement, negativity, and democratic accountability. 6/
Importantly, and counterintuitively:
Normative disapproval does not translate into electoral penalties.
Even when people see deceptive AI use as norm-breaking, party favorability remains unchanged among supporters, opponents, and independents. 5/
Deceptive AI uses (e.g., deepfakes, impersonation, interactive astroturfing) are consistently seen as violating norms of legitimate political competition, while operational and outreach uses are evaluated more ambivalently. 4/
Empirically, we draw on a representative survey and two preregistered survey experiments (n = 7,635) to map public reactions across these AI use types, including perceptions of norm violations, democratic harm, and governance preferences. 3/
Our first contribution is conceptual: we identify three analytically distinct types of AI use in election campaigns
- Campaign operations
- Voter outreach
- Deception
This set accounts for the wide variety of AI use in campaigning and moves the debate beyond its myopic focus on deepfakes. 2/
Political campaigns worldwide experiment with AI. But how do people see different electoral uses of AI and with what consequences?
In a new study in @polcommjournal.bsky.social with @adrauc.bsky.social and @kunkakom.bsky.social, we address these questions. www.tandfonline.com/doi/full/10.... 1/
Why do many people profess commitment to #democracy yet vote for politicians who undermine it? LMU political scientist Alexander Wuttke investigates this question, and has now received funding of 1.17 million euros from the Emmy Noether Programme of the @dfg.de! #LMUMuenchen
Algorithms and platforms alone are not to blame for growing polarization; it is more complex than that. @ajungherr.bsky.social researches, among other places at bidt, how digital media are changing political communication.
More on him & his research in this profile: www.bidt.digital/im-portraet-...
📄 Open Access paper:
Public Opinion on the Politics of AI Alignment: Cross-National Evidence on Expectations for AI Moderation From Germany and the United States.
Published in @socialmedia-soc.bsky.social.
journals.sagepub.com/doi/10.1177/...
Our findings highlight the need to:
• Recognize public heterogeneity across and within countries
• Build transparent governance frameworks
• Carefully distinguish between safety-related and value-laden interventions
• Avoid assuming that alignment preferences are universal
📌 Why this matters:
Debates about AI alignment often focus on technical challenges.
But alignment is also political: public expectations shape what people see as legitimate, trustworthy, and acceptable interventions in AI governance.
We also find consistent effects for:
• Political partisanship: Green/Democratic identifiers are more supportive of all forms of output adjustment.
• Gender: Women show stronger support, especially for safety and bias-mitigating interventions.
🇩🇪 In Germany, attitudes vary more with personal experience, free speech orientations, and political ideology.
🇺🇸 In the U.S., views are more uniform except for the promotion of aspirational imaginaries, where political ideology plays a stronger role.
🇺🇸🇩🇪 Cross-national differences:
U.S. respondents consistently show higher support for most alignment goals, except for the promotion of aspirational imaginaries.
They also report much higher AI use, which we interpret as greater societal involvement with AI and more consolidated expectations.
But support drops for bias mitigation and especially for aspirational imaginaries, that is, AI outputs that promote particular social values. These value-laden interventions are viewed more cautiously.
🔍 Key finding:
Across both countries, accuracy and safety top the list. People want AI systems that are factually reliable and avoid harmful content. Broad, cross-national consensus.
We ran surveys in Germany (n = 1,800) and the U.S. (n = 1,756) to understand what people expect from AI-enabled systems across four #alignment goals:
• Accuracy & reliability
• Safety
• Bias mitigation
• Providing aspirational imaginaries
📢 New paper out!
What do people want from AI systems? How should outputs be adjusted? And how do views differ between countries?
@adrauc.bsky.social and I explore this for @socialmedia-soc.bsky.social in Public Opinion on the Politics of AI Alignment.
journals.sagepub.com/doi/10.1177/...
My Nieman Lab prediction for 2026: the AI bubble may pop, but people's use of AI for information won't, and it's better if we start taking this seriously.
Digital public debates offer unique insights into how people make sense of technological change, and highlight cross-national differences in culture, politics, and expectations.
You can find the paper with full findings here: www.sciencedirect.com/science/arti...
💡 Takeaway
How societies talk about AI is tied to economic interests and cultural values.
These conversations don’t just reflect attitudes toward technology; they signal future societal fault lines.