Today, we presented the main results of the Mental Health Days Study 2025 (N = 8,177).
Results
> In May 2025, Austria implemented a nationwide smartphone ban in schools
> Compared to 2024, daily smartphone use fell by 30 minutes
> Life satisfaction rose (5.36 to 5.52)
> Depression declined (15% to 12%)
Interesting unpacking of deepfakes:
- darkfakes
- glowfakes
- foefakes
- fanfakes
“A blanket approach to ‘fighting deepfakes’ risks treating satirical content the same as malicious attacks”
@morganwack.bsky.social & co in @techpolicypress.bsky.social
www.techpolicy.press/scrutinizing...
We've been following deepfakes for the last 7 years. This article aims to shed additional light on the topic by:
1) creating a conceptual typology of deepfakes
2) coining new concepts like 'glowfakes' and 'fanfakes'
3) & analyzing deepfakes from the 2024 elections
@grailcenter.bsky.social
'Darkfakes,' 'Foefakes,' 'Fanfakes,' and 'Glowfakes': Morgan Wack, Christina Walker, Alena Birrer, Kaylyn Jackson Schiff, Daniel Schiff, and JP Messina systematically analyzed political deepfakes and developed a classification that categorizes them along key dimensions.
Recently published work from colleagues Morgan Wack (postdoc at University of Zurich) & Joey Schafer (UW PhD candidate) showing how state election policies that delayed vote counting fueled rumoring and conspiracy theorizing around the 2020 election: blogs.lse.ac.uk/usappblog/20...
The 2020 US election shows how state election policies can fuel conspiracy theories about voting write @morganwack.bsky.social of @ikmz.bsky.social and @schafer.bsky.social of @uwnews.uw.edu
blogs.lse.ac.uk/usappblog/20...
How often do you see papers that suggest easy policies that could reduce electoral misinformation? Here's one I worked on with a great team out of UW and led by @morganwack.bsky.social and @schafer.bsky.social
Thrilled to finally see this paper out in print several years after @schafer.bsky.social and I started this project alongside @ikennedy.bsky.social, @beeeeeers.bsky.social, @emmaspiro.bsky.social & @katestarbird.bsky.social! Unfortunately the detrimental policies we discuss remain relevant.
Legislating Uncertainty: New paper about the 2020 election, showing how laws in certain states (specifically laws that delayed the counting of mail-in ballots) increased uncertainty about election results and contributed to rumoring about election integrity: onlinelibrary.wiley.com/doi/10.1111/...
Proud to have co-led this paper with @morganwack.bsky.social (and other coauthors @ikennedy.bsky.social @beeeeeers.bsky.social @emmaspiro.bsky.social @katestarbird.bsky.social) looking at the impacts of state-level election laws on uncertainty and election integrity rumors!
Russian propaganda campaign used AI to scale output without sacrificing credibility, study finds
A study of a propaganda site with ties to Russia shows that using AI allows propagandists to dial up the volume of their content without sacrificing persuasiveness. The authors call for action to combat the threat. In PNAS Nexus: academic.oup.com/pnasnexus/ar...
A study of a Russian-backed propaganda outlet finds that AI is already being used to enhance messaging and expand disinformation campaigns, raising concerns about its growing impact on global influence operations.
In @sciencex.bsky.social: phys.org/news/2025-04...
Here is the link to the full (open-access) paper! 🔗
academic.oup.com/pnasnexus/ar... We welcome feedback & potential collaboration on countering emerging AI-driven disinformation campaigns!
Finding Three 📝: Even with the shift to AI, the persuasive potential and credibility of the articles persisted. This finding suggests that even while rapidly scaling article production, the website did not need to sacrifice its perceived authenticity or potential impact. 6/
Finding Two 📊: AI-use corresponded with greater topic breadth. By rewriting stories, the website covered more diverse subjects (from gun crime to the Ukraine invasion). Prompt leaks also suggest use of AI to rate potential materials by their alignment with campaign goals. 5/
Finding One 📈: AI use significantly increased the quantity of disinformation. This aligns with the idea that generative models reduce the cost/time of writing, editing, and curating. Once the site adopted LLM tools, weekly post counts soared. 4/
We focus on a site identified by the Clemson Forensics Hub that presented itself as a genuine U.S. news outlet but was actually part of a Russian-affiliated influence operation. By pinpointing a transition from human-edited to LLM-edited content, we show: 3/
There have been growing concerns about the use of large language models (LLMs) in the production of disinformation, but real-world evidence has been difficult to track. Our paper provides a direct look at a Russian-linked campaign which used AI tools to target Americans. 2/
🚨 Excited to see our new paper out at @pnasnexus.org w/@pwarren.bsky.social, Darren Linvill, & Carl Ehrett!
Using data from a Russia-backed influence operation running puppet website DCWeekly, we show how LLMs are being used to scale global disinfo campaigns: 1/ 🧵
academic.oup.com/pnasnexus/ar...
Excited to share our new article in the American Journal of Political Science (@ajpseditor.bsky.social) with @sborwein.bsky.social, @rmichaelalvarez.bsky.social, @bartbonikowski.bsky.social, & Peter Loewen
onlinelibrary.wiley.com/doi/10.1111/...
📢 #RwandaClassified: disinformation from the Rwandan authorities persists.
As the conflict in the DRC intensifies, Kagame's networks remain active: the war in North Kivu and mineral trafficking remain taboo subjects for the Rwandan regime. 🔍
forbiddenstories.org/fr/actualite...
Thrilled to share my new publication w/ @morganwack.bsky.social & Kevin Aslett in Social Science Quarterly: “Silence in the Stands: Assessing the Impact of Russian State-Linked Sportswashing on Online Fan Behavior Following the Full-Scale Invasion of Ukraine.” onlinelibrary.wiley.com/doi/10.1111/...
How right wing media is like improv theater. My coauthor @danielletomson.bsky.social and I are really proud of this piece which builds upon ~10 years of research at UW studying the participatory nature of rumors/disinformation and Danielle’s dissertation studying right-wing influencers for 5+ years.
⏰ Another opening for a PhD position in our SNF project (with @morganwack.bsky.social and @esserfrank.bsky.social) on political social media influencers! 🥳 If you are into computational methods, social media, and political communication, we are looking for you 🔎 All details here: tinyurl.com/44arrawh
Just published a Nature comment highlighting a few of the rumors our UW team expects to see going into the Nov 5 election — from rumors that falsely frame election errors as impactful and intentional to rumors about "non citizen voters" and "suspicious behaviors". www.nature.com/articles/d41...