The integration of AI into campaign operations and voter outreach is evolving rapidly and will become a core concern in the conduct, regulation, and study of campaigns worldwide. There remains much to do. We will be back. 13/
23.02.2026 09:47
We are starting to see strong work identifying how, and to what effect, AI is used in campaigns. See work by @florianfoos.bsky.social, @giuliasandri.bsky.social, @meinungsfuehrer.bsky.social, @pjost.bsky.social, and @profkatedommett.bsky.social. 11/
23.02.2026 09:44
We fielded our surveys in early 2024. Since then, much has happened: both public awareness of AI and its integration into everyday campaign practice have accelerated rapidly. As usual, academic accounts are only beginning to catch up. 10/
23.02.2026 09:42
For campaigners and regulators, the findings suggest that deceptive AI use may be electorally low-risk but systemically costly, accelerating demand for blunt regulation, while more mundane AI uses face far less public resistance. 9/
23.02.2026 09:04
This shows how campaign practices can function as exemplars, shaping public attitudes toward AI governance far beyond elections. 8/
23.02.2026 09:03
Importantly, the consequences of deceptive AI use emerge elsewhere. Information about AI deception increases feelings of lost control and support for restrictive AI regulation, including calls for halting AI development more broadly. 7/
23.02.2026 09:02
This shows a misalignment between public norms and electoral incentives, likely driven by motivated reasoning and polarization. The study thus speaks directly to classic debates in political communication about norm enforcement, negativity, and democratic accountability. 6/
23.02.2026 08:58
Importantly, and counterintuitively:
Normative disapproval does not translate into electoral penalties.
Even when people see deceptive AI use as norm-breaking, party favorability remains unchanged among supporters, opponents, and independents. 5/
23.02.2026 08:57
Deceptive AI uses (e.g., deepfakes, impersonation, interactive astroturfing) are consistently seen as violating norms of legitimate political competition, while operational and outreach uses are evaluated more ambivalently. 4/
23.02.2026 08:52
Empirically, we draw on a representative survey and two preregistered survey experiments (n = 7,635) to map public reactions across these AI use types, including perceptions of norm violations, democratic harm, and governance preferences. 3/
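The core of a vignette-style survey experiment like this is a comparison of mean outcomes across randomly assigned conditions. A minimal sketch with synthetic data — the condition names follow the typology in this thread, but the rating scale, effect sizes, and the `simulate_respondent` helper are invented for illustration:

```python
import random

random.seed(1)

CONDITIONS = ["campaign operations", "voter outreach", "deception"]
# Invented population means for illustration: deceptive uses are rated
# as stronger norm violations (1-7 scale) than operational/outreach uses.
TRUE_MEANS = {"campaign operations": 3.0, "voter outreach": 3.5, "deception": 6.0}

def simulate_respondent():
    """Randomly assign one vignette condition and draw a clipped rating."""
    cond = random.choice(CONDITIONS)
    rating = min(7.0, max(1.0, random.gauss(TRUE_MEANS[cond], 1.0)))
    return cond, rating

data = [simulate_respondent() for _ in range(7635)]  # n from the paper

# Because assignment is random, the difference in condition means is an
# unbiased estimate of the treatment effect of the vignette.
means = {c: 0.0 for c in CONDITIONS}
counts = {c: 0 for c in CONDITIONS}
for cond, rating in data:
    means[cond] += rating
    counts[cond] += 1
for c in CONDITIONS:
    means[c] /= counts[c]
    print(f"{c}: mean norm-violation rating = {means[c]:.2f}")
```

With randomization, no covariate adjustment is needed for unbiasedness; a preregistered analysis would typically add standard errors and the hypothesis tests specified in the preregistration.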
23.02.2026 08:42
Our first contribution is conceptual: we identify three analytically distinct types of AI use in election campaigns:
- Campaign operations
- Voter outreach
- Deception
This set accounts for the wide variety of AI use in campaigning and moves the debate beyond its myopic focus on deepfakes. 2/
23.02.2026 08:39
Political campaigns worldwide experiment with AI. But how do people see different electoral uses of AI and with what consequences?
In a new study in @polcommjournal.bsky.social with @adrauc.bsky.social and @kunkakom.bsky.social, we address these questions. www.tandfonline.com/doi/full/10.... 1/
23.02.2026 08:30
AI and Democracy: Emmy Noether funding for LMU political scientist
Alexander Wuttke receives funding from the DFG's Emmy Noether Programme.
Why do many people profess support for #democracy yet vote for politicians who undermine it? LMU political scientist Alexander Wuttke studies this question and has now received #funding of 1.17 million from the @dfg.de Emmy Noether Programme! #LMUMuenchen
12.02.2026 08:19
Algorithms and platforms alone are not to blame for rising polarization; the picture is more complex. @ajungherr.bsky.social studies, at bidt among other places, how digital media are changing political communication.
More on his background and research in this profile: www.bidt.digital/im-portraet-...
05.01.2026 11:01
Open Access paper:
Public Opinion on the Politics of AI Alignment: Cross-National Evidence on Expectations for AI Moderation From Germany and the United States.
Published in @socialmedia-soc.bsky.social.
journals.sagepub.com/doi/10.1177/...
10.12.2025 09:27
Our findings highlight the need to:
β’ Recognize public heterogeneity across and within countries
β’ Build transparent governance frameworks
β’ Carefully distinguish between safety-related and value-laden interventions
β’ Avoid assuming that alignment preferences are universal
10.12.2025 09:26
Why this matters:
Debates about AI alignment often focus on technical challenges.
But alignment is also political: public expectations shape what people see as legitimate, trustworthy, and acceptable interventions in AI governance.
10.12.2025 09:25
We also find consistent effects for:
β’ Political partisanship: Green/Democratic identifiers more supportive of all forms of output adjustments.
β’ Gender: Women show stronger support, especially for safety and bias-mitigating interventions.
10.12.2025 09:24
In Germany, attitudes vary more with personal experience, free speech orientations, and political ideology.
In the U.S., views are more uniform, except for the promotion of aspirational imaginaries, where political ideology plays a stronger role.
10.12.2025 09:20
Cross-national differences:
U.S. respondents consistently show higher support for most alignment goals, except for the promotion of aspirational imaginaries.
They also report much higher AI use, which we interpret as greater societal involvement with AI and more consolidated expectations.
10.12.2025 09:17
But support drops for bias mitigation and especially for aspirational imaginaries - AI outputs that promote particular social values. These value-laden interventions are viewed more cautiously.
10.12.2025 09:15
Key finding:
Across both countries, accuracy and safety top the list. People want AI systems that are factually reliable and avoid harmful content. Broad, cross-national consensus.
10.12.2025 09:09
We ran surveys in Germany (n=1800) and the U.S. (n=1756) to understand what people expect from AI-enabled systems across four #alignment goals:
β’ Accuracy & reliability
β’ Safety
β’ Bias mitigation
β’ Providing aspirational imaginaries
10.12.2025 09:08
New paper out!
What do people want from AI systems? How should outputs be adjusted? And how do views differ between countries?
@adrauc.bsky.social and I explore this for @socialmedia-soc.bsky.social in Public Opinion on the Politics of AI Alignment.
journals.sagepub.com/doi/10.1177/...
10.12.2025 09:05
My Nieman Lab prediction for 2026: The AI bubble may pop, but people's use of AI for information won't, and it's better if we start taking this seriously.
05.12.2025 10:04
Takeaway:
How societies talk about AI is tied to economic interests and cultural values.
These conversations don't just reflect attitudes toward technology - they signal future societal fault lines.
03.12.2025 15:13
Finding 3: Beware aggregated trends
The debate became increasingly critical over time, but not because early participants changed their views.
Rather, later entrants were systematically more skeptical.
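This kind of composition effect is easy to reproduce. A minimal sketch with synthetic data — cohort sizes, the monthly entry schedule, and all sentiment values are invented for illustration: every user keeps a fixed sentiment, yet the aggregate trend turns negative simply because later-joining cohorts are more skeptical.

```python
import random

random.seed(0)

# Synthetic cohorts: each user holds a fixed sentiment over time,
# but cohorts joining in later months start out more skeptical
# (cohort joining in month m has mean sentiment 0.8 - 0.1 * m).
users = []  # (join_month, fixed_sentiment)
for month in range(6):
    for _ in range(100):
        users.append((month, 0.8 - 0.1 * month + random.gauss(0, 0.05)))

# Aggregate sentiment among all users active in each month:
for month in range(6):
    active = [s for join, s in users if join <= month]
    print(f"month {month}: mean sentiment = {sum(active) / len(active):.2f}")

# The aggregate falls month after month even though no individual
# ever changed their view -- a pure composition effect.
```

Disaggregating by entry cohort (or tracking the same users over time) is what separates genuine attitude change from churn in who is talking.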
03.12.2025 15:13
Finding 2: Cultural context shapes reactions
Users from individualistic cultures engaged earlier - but were also more critical.
Users from cultures with high uncertainty avoidance were less likely to express positive views.
03.12.2025 15:13