
Lukas Warode

@lwarode.bsky.social

Political Science PhD Student, University of Mannheim. lwarode.github.io

354 Followers  |  404 Following  |  39 Posts  |  Joined: 28.09.2023

Latest posts by lwarode.bsky.social on Bluesky

Post image Post image Post image

partycoloR is now on CRAN! Started as a simple idea 6 years ago, now it's a full-featured package. Extract party colors and logos from Wikipedia with one line of code. It's already powering ParlGov Dashboard.

install.packages("partycoloR")
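A minimal usage sketch (my own illustration; the function name and return format below are assumptions, not taken from the package documentation):

library(partycoloR)
# Illustrative only: pass the parties' Wikipedia URLs, get back their colours.
# The real partycoloR interface may use different function/argument names.
party_urls <- c(
  "https://en.wikipedia.org/wiki/Social_Democratic_Party_of_Germany",
  "https://en.wikipedia.org/wiki/Christian_Democratic_Union_of_Germany"
)
party_colors <- wikipedia_party_color(party_urls)  # hypothetical function name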

28.01.2026 08:20 · 👍 99  🔁 20  💬 0  📌 2

The app shows how German politicians associate words with "left" or "right" based on ideological in- and out-group narratives and contested concepts. For example, both ideological sides claim the term "freedom."

21.01.2026 08:19 · 👍 2  🔁 0  💬 0  📌 0

This question became the topic of my 2nd dissertation paper. I also considered creating an app to communicate the results efficiently and allow you to explore the patterns yourself. I've used Shiny for years, but "AI-assisted agentic engineering" (aka vibe coding 😂) really helped a lot here.

21.01.2026 08:19 · 👍 5  🔁 0  💬 1  📌 0
Post image Post image

Words like "patriotism" and "racism" are often associated with the right, while "solidarity" and "socialism" are associated with the left.

But who uses these associations, and how do political positions matter?

📊 App: lukas-warode.shinyapps.io/lr-words-map/
📄 Paper: www.nature.com/articles/s41...

21.01.2026 08:19 · 👍 11  🔁 3  💬 1  📌 0
Post image

"When Conservatives See Red but Liberals Feel Blue: Labeler Characteristics and Variation in Content Annotation" by
Nora Webb Williams, Andreu Casas, Kevin Aslett, and John Wilkerson.
www.journals.uchicago.edu/doi/10.1086/...

21.01.2026 07:02 · 👍 2  🔁 1  💬 0  📌 0

Congrats!!

19.01.2026 10:48 · 👍 2  🔁 0  💬 0  📌 0

Series recommendations: Fargo, The Sopranos, and maybe Narcos :)

13.01.2026 10:20 · 👍 1  🔁 0  💬 0  📌 0
Post image

The Call for Papers and Panels for #COMPTEXT2026 in Birmingham (23-25 April) is out; feel free to circulate: shorturl.at/gRg0p!
Deadline: January 16!

17.12.2025 09:06 · 👍 20  🔁 15  💬 1  📌 4
Post image

23.11.2025 12:11 · 👍 4  🔁 0  💬 0  📌 0

The life and death of DiD

19.11.2025 20:37 · 👍 1  🔁 0  💬 0  📌 0

Dissertation track? 😉

19.11.2025 15:37 · 👍 1  🔁 0  💬 1  📌 0
Slavoj Žižek meme image

"You see, the endless renovation of the Stuttgart train station is a symbol of our late-capitalist condition: the project is always 'in progress,' yet nothing ever progresses. The construction site itself becomes the true destination."

19.11.2025 13:17 · 👍 55  🔁 7  💬 2  📌 1
Post image

www.instagram.com/p/DQPf_pJiG8...

Is it a fit?

29.10.2025 19:10 · 👍 2  🔁 0  💬 2  📌 0

Job Alert! We are hiring two post-docs (full time, 4+ years) in our project SCEPTIC - Social, Computational and Ethical Premises of Trust and Informational Cohesion with @annanosthoff.bsky.social @guzoch.bsky.social and Prof. Andreas Peters (uol.de/informatik/s...)

17.10.2025 09:15 · 👍 28  🔁 32  💬 3  📌 2
Post image

📣 New Preprint!
Have you ever wondered what political content is in LLMs' training data? What political opinions are expressed? What is the proportion of left- vs right-leaning documents in the pre- and post-training data? Do they correlate with the political biases reflected in the models?

29.09.2025 14:54 · 👍 47  🔁 14  💬 2  📌 1
Preview
The threat of analytic flexibility in using large language models to simulate human data: A call to attention
Social scientists are now using large language models to create "silicon samples" - synthetic datasets intended to stand in for human respondents, aimed at revolutionising human subjects research. How...

Can large language models stand in for human participants?
Many social scientists seem to think so, and are already using "silicon samples" in research.

One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.

THREAD 🧵

18.09.2025 07:56 · 👍 338  🔁 156  💬 12  📌 59
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses. 
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking: incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.

🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825
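A toy R simulation of the mechanism (my own sketch under simplified assumptions, not the authors' code): if one LLM configuration's annotation errors correlate with the covariate of interest, a regression on the LLM labels can look significant even though the ground-truth effect is null.

set.seed(42)
n <- 2000
x <- rbinom(n, 1, 0.5)        # covariate of interest (e.g. speaker group)
y_true <- rbinom(n, 1, 0.3)   # ground-truth label, independent of x (null effect)

# Hypothetical LLM annotations whose error rate depends on x
flip <- rbinom(n, 1, ifelse(x == 1, 0.20, 0.02))
y_llm <- ifelse(flip == 1, 1 - y_true, y_true)

coef(summary(glm(y_true ~ x, family = binomial)))["x", ]  # typically non-significant
coef(summary(glm(y_llm ~ x, family = binomial)))["x", ]   # often spuriously "significant"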

12.09.2025 10:33 · 👍 303  🔁 106  💬 6  📌 23

Implications for political behaviour, communication, and representation are manifold, as 'left' and 'right' are central categories in polarised public discourse – which is particularly evident in pejorative usage, such as labelling political opponents as 'racist' or 'socialist'.

26.08.2025 09:38 · 👍 1  🔁 0  💬 0  📌 0
Post image

Both in- and out-ideological associations are externally validated by serving as seed words to scale parliamentary speeches. The resulting ideal points reflect party ideology across different specifications in the German Bundestag.

26.08.2025 09:38 · 👍 1  🔁 0  💬 1  📌 0
Post image

The mapping is based on associations from open-ended survey responses in German candidate surveys. Words are mapped into a semantic space using word embeddings and weighted by frequency. Construct validity is ensured by using alternative embeddings and frequency weightings.
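As a rough illustration of that mapping step (toy data only; not the paper's embeddings, vocabulary, or weighting scheme), the R sketch below scores words on a left-right semantic axis via cosine similarity to two anchor vectors and weights them by mention frequency:

set.seed(1)
vocab <- c("freiheit", "solidaritaet", "patriotismus", "rassismus")
emb <- matrix(rnorm(length(vocab) * 50), nrow = length(vocab),
              dimnames = list(vocab, NULL))  # stand-in for real word embeddings
anchor_left  <- rnorm(50)   # stand-in for the embedding of "links"
anchor_right <- rnorm(50)   # stand-in for the embedding of "rechts"

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

# Semantic position: closer to "rechts" (positive) or to "links" (negative)
position <- apply(emb, 1, function(v) cosine(v, anchor_right) - cosine(v, anchor_left))

# Weight each word by how often respondents mentioned it (hypothetical counts)
freq <- c(freiheit = 120, solidaritaet = 80, patriotismus = 45, rassismus = 30)
weighted_position <- position * (freq[names(position)] / sum(freq))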

26.08.2025 09:38 · 👍 1  🔁 0  💬 1  📌 0
Post image

Words associated with both the left and the right are mapped to the semantic centre, where connotations can vary: 'freedom' has a positive connotation (it is primarily used by the respective in-group to describe the left and the right), while 'politics' has a rather neutral connotation.

26.08.2025 09:38 · 👍 1  🔁 0  💬 1  📌 0
Post image

This framework yields associations that are driven by positive (in-ideology) and negative (out-ideology) associations. Examples: 'justice' (left) and 'patriotism' (right) are in-ideological associations; 'socialism' (left) and 'racism' (right) are out-ideological associations.

26.08.2025 09:38 · 👍 1  🔁 0  💬 1  📌 0
Post image

Left and right are essential poles in political discourse, yet we know little about how they are associated across the spectrum. I propose a 2-dimensional model that accounts for both semantics (is a term left or right?) and position (do the associations come from the left or the right?).

26.08.2025 09:38 · 👍 1  🔁 0  💬 1  📌 0
Post image

My 2nd dissertation paper is out in @nature.com Humanities and Social Sciences Communications: www.nature.com/articles/s41...

I explore how associations with 'left' and 'right' vary systematically by semantic and political position.

26.08.2025 09:38 · 👍 22  🔁 3  💬 1  📌 0

Yes, the golden Twitter era is unfortunately over

25.08.2025 13:02 · 👍 1  🔁 0  💬 0  📌 0
YouTube video by SamGordonRHK: Sitcom Laugh Track

youtu.be/4VTBMznLrWs?...

25.08.2025 12:59 · 👍 1  🔁 0  💬 1  📌 0
Post image

📢 New Publication Alert!
Our latest article (with @msaeltzer.bsky.social), "Issue congruence between candidates' Twitter communication and constituencies in an MMES: Migration as an exemplary case", has just been published in Parliamentary Affairs.
academic.oup.com/pa/advance-a...

13.08.2025 17:57 · 👍 23  🔁 11  💬 1  📌 0
Title and abstract of the paper.

Now out in Social Networks

Network analysis aspires to be "anticategorical," yet its basic units (relationships) are usually readily categorized ('friendship,' 'love'). Thus, a nontrivial cultural typification is asserted in the very building blocks of most network analyses.

doi.org/10.1016/j.so...

07.08.2025 14:22 · 👍 30  🔁 7  💬 1  📌 0
Preview
A cartoon character from South Park says "I'm gonna need your help"

Calling all parliaments experts!
Say there's a debate in parliament, and a related vote. How frequently would these be on different days? different weeks? I don't mean different readings of bills, because these will also have different debates.
@sgparliaments.bsky.social #polisky #parlisky

22.07.2025 16:06 · 👍 1  🔁 1  💬 0  📌 0
Post image

Can banning political ideologies protect democracy? 🛡️🆚🗣️

Our (w. @valentimvicente.bsky.social) paper finds: punishing individuals might backfire. We study a West German policy banning "extreme left" individuals from working for the state.

#Democracy #PoliticalScience

🧵

url: osf.io/usqdb_v2

10.07.2025 09:54 · 👍 112  🔁 37  💬 7  📌 5
