Andreas Jungherr


@ajungherr.bsky.social

Making sense of digital technology - the changes it brings, the opportunities it provides, and the challenges it presents. Professor, University of Bamberg.

2,050 Followers · 145 Following · 194 Posts · Joined Oct 2023
1 week ago

Don’t hate the player, hate the tools: AI in US Political Campaigning Edition

I always read his papers, and so should you (I think). His latest is just out in Political Communication (buff.ly/dL5KVHx), with colleagues Adrian Rauchfleisch & Alexander Wuttke.

1 week ago

Thank you, @felixsimon.bsky.social. Great to hear that the paper is useful!

2 weeks ago

The integration of AI in campaign operations and voter outreach is evolving rapidly and will become a core concern within the conduct, regulation, and study of campaigns worldwide. There remains much to do. We will be back. 13/

2 weeks ago
Preview
AI & Elections Clinic | TJ Pyche | Substack: The AI & Elections Clinic is designed to be the place that tracks, nudges, slows, and shows how artificial intelligence and elections interact in the years ahead.

And there are many other sources that surface and discuss different uses and experiences with AI (see
@katieharbath.bsky.social, @msifry.bsky.social, aiandelections.substack.com, Higher Ground Labs). 12/

2 weeks ago

We are starting to see strong work identifying how and to what effect AI is used in campaigns. See work by @florianfoos.bsky.social, @giuliasandri.bsky.social, @meinungsfuehrer.bsky.social, @pjost.bsky.social, and @profkatedommett.bsky.social. 11/

2 weeks ago

We fielded our surveys in early 2024. Since then, much has happened. Both public awareness of AI and its integration into everyday campaign practice have accelerated rapidly. As usual, academic accounts are only beginning to catch up. 10/

2 weeks ago

For campaigners and regulators, the findings suggest that deceptive AI use may be electorally low-risk but systemically costly, accelerating demand for blunt regulation, while more mundane AI uses face far less public resistance. 9/

2 weeks ago

This shows how campaign practices can function as exemplars, shaping public attitudes toward AI governance far beyond elections. 8/

2 weeks ago

Importantly, the consequences of deceptive AI use emerge elsewhere. Information about AI deception increases feelings of lost control and support for restrictive AI regulation, including calls for halting AI development more broadly. 7/

2 weeks ago

This shows a misalignment between public norms and electoral incentives, likely driven by motivated reasoning and polarization. The study thus speaks directly to classic debates in political communication about norm enforcement, negativity, and democratic accountability. 6/

2 weeks ago

Importantly, and counterintuitively:
Normative disapproval does not translate into electoral penalties.
Even when people see deceptive AI use as norm-breaking, party favorability remains unchanged among supporters, opponents, and independents. 5/

2 weeks ago

Deceptive AI uses (e.g., deepfakes, impersonation, interactive astroturfing) are consistently seen as violating norms of legitimate political competition, while operational and outreach uses are evaluated more ambivalently. 4/

2 weeks ago

Empirically, we draw on a representative survey and two preregistered survey experiments (n = 7,635) to map public reactions across these AI use types, including perceptions of norm violations, democratic harm, and governance preferences. 3/

2 weeks ago

Our first contribution is conceptual: we identify three analytically distinct types of AI use in election campaigns:

- Campaign operations
- Voter outreach
- Deception

This set accounts for the wide variety of AI use in campaigning and moves the debate beyond its myopic focus on deepfakes. 2/

2 weeks ago

Political campaigns worldwide experiment with AI. But how do people see different electoral uses of AI and with what consequences?

In a new study in @polcommjournal.bsky.social with @adrauc.bsky.social and @kunkakom.bsky.social, we address these questions. www.tandfonline.com/doi/full/10.... 1/

1 month ago
Preview
AI and Democracy: Emmy Noether Funding for LMU Political Scientist. Alexander Wuttke receives funding from the DFG's Emmy Noether Programme.

Why do many people profess their commitment to #democracy yet vote for politicians who undermine it? LMU political scientist Alexander Wuttke investigates this question and has now received funding of €1.17 million from the Emmy Noether Programme of the @dfg.de! #LMUMuenchen

2 months ago

Algorithms or platforms alone are not to blame for increasing polarization; it is more complex than that. @ajungherr.bsky.social researches, among other places at bidt, how digital media are changing political communication.

More on him and his research in this portrait: www.bidt.digital/im-portraet-...

3 months ago

📄 Open Access paper:
Public Opinion on the Politics of AI Alignment: Cross-National Evidence on Expectations for AI Moderation From Germany and the United States.
Published in @socialmedia-soc.bsky.social.
journals.sagepub.com/doi/10.1177/...

3 months ago

Our findings highlight the need to:
• Recognize public heterogeneity across and within countries
• Build transparent governance frameworks
• Carefully distinguish between safety-related and value-laden interventions
• Avoid assuming that alignment preferences are universal

3 months ago

📌 Why this matters:
Debates about AI alignment often focus on technical challenges.
But alignment is also political: public expectations shape what people see as legitimate, trustworthy, and acceptable interventions in AI governance.

3 months ago

We also find consistent effects for:
• Political partisanship: Green and Democratic identifiers are more supportive of all forms of output adjustment.
• Gender: Women show stronger support, especially for safety and bias-mitigating interventions.

3 months ago

🇩🇪 In Germany, attitudes vary more with personal experience, free speech orientations, and political ideology.
🇺🇸 In the U.S., views are more uniform except for the promotion of aspirational imaginaries, where political ideology plays a stronger role.

3 months ago

🇺🇸🇩🇪 Cross-national differences:
U.S. respondents consistently show higher support for most alignment goals, except for the promotion of aspirational imaginaries.
They also report much higher AI use, which we interpret as greater societal involvement with AI and more consolidated expectations.

3 months ago

But support drops for bias mitigation and especially for aspirational imaginaries, that is, AI outputs that promote particular social values. These value-laden interventions are viewed more cautiously.

3 months ago

🔍 Key finding:
Across both countries, accuracy and safety top the list. People want AI systems that are factually reliable and avoid harmful content. Broad, cross-national consensus.

3 months ago

We ran surveys in Germany (n = 1,800) and the U.S. (n = 1,756) to understand what people expect from AI-enabled systems across four #alignment goals:

• Accuracy & reliability
• Safety
• Bias mitigation
• Providing aspirational imaginaries

3 months ago

📢 New paper out!
What do people want from AI systems? How should outputs be adjusted? And how do views differ between countries?
@adrauc.bsky.social and I explore this for @socialmedia-soc.bsky.social in Public Opinion on the Politics of AI Alignment.

journals.sagepub.com/doi/10.1177/...

3 months ago

My Nieman Lab prediction for 2026: The AI bubble may pop, but people's use of AI for information won't, and it's better if we start taking this seriously.

3 months ago
Preview
Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change Public product launches in Artificial Intelligence can serve as focusing events for collective attention, surfacing how societies react to technologic…

Digital public debates offer unique insights into how people make sense of technological change, and highlight cross-national differences in culture, politics, and expectations.

You can find the paper with full findings here: www.sciencedirect.com/science/arti...

3 months ago

💡 Takeaway
How societies talk about AI is tied to economic interests and cultural values.
These conversations don't just reflect attitudes toward technology; they signal future societal fault lines.
