In collaboration with @ryanmoore.bsky.social @fangjingtu.bsky.social and Dr. Jeff Hancock, and supported by the Stanford Social Media Lab and @stanfordcyber.bsky.social.
18.04.2025 21:29
@harryyan.bsky.social
Postdoc at Stanford Social Media Lab, Cyber Policy Center. Incoming AP @TAMUComm. PhD*2 in Informatics @IULuddy + Media Sciences @IUMediaSchool. @KnightFdn @OsoMe_IU Fellow. @ICR_IU Researcher. #PublicOpinion #Tech #GenAI #Bots #MediaEffects
Big picture:
This study shows we should focus on building what we call digital strength:
a holistic skill set for navigating AI-mediated information environments,
focused not just on detection skills
but also on cultivating open-minded thinking and evidentiary judgment. (10/10)
🎯 Policy and design takeaway:
It's not enough to teach people how to spot AI.
We also need to help them know when to trust authentic content.
Effective interventions must combine GenAI literacy, cognitive reflection training, and demographic targeting. (9/)
But there's hope.
Two factors helped:
🧠 Actively Open-Minded Thinking (AOT):
A cognitive tendency to consider evidence that challenges one's prior beliefs.
GenAI knowledge:
Factual understanding of generative AI.
AOT especially helped restore trust in real images, not just spot synthetics. (8/)
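The thread doesn't give the underlying model, so here is only a toy sketch of what the claim means statistically: a logistic regression of per-image accuracy on AOT and GenAI knowledge scores. All variable names, data, and coefficients below are invented for illustration; this is not the paper's analysis.

```python
import numpy as np
import statsmodels.api as sm

# Toy illustration: do AOT and GenAI knowledge predict correctly
# accepting AUTHENTIC images as real? All numbers are invented.
rng = np.random.default_rng(0)
n = 1800  # matches the study's sample size; everything else is synthetic
aot = rng.normal(0, 1, n)          # actively open-minded thinking score
genai_know = rng.normal(0, 1, n)   # factual GenAI knowledge score

# Hypothetical data-generating process with positive effects for both.
logit = -0.2 + 0.5 * aot + 0.3 * genai_know
correct = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([aot, genai_know]))
print(sm.Logit(correct, X).fit(disp=False).summary())
```

In this framing, "AOT helped restore trust in real images" corresponds to a positive AOT coefficient when the outcome is accuracy on authentic images specifically.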
Who's most vulnerable?
Older adults: more likely to doubt authentic images
Women: showed a larger accuracy gap than men
Partisans: more likely to doubt real images that conflict with their beliefs
#GenAI is amplifying existing digital and partisan divides. (7/)
Why does this matter?
Because trust in authentic political imagery is eroding.
This isn't just about deception. It's about undermining visual evidence itself, leading to a "liar's dividend":
real images get dismissed as fake. (6/)
Key finding:
Participants over-attributed AI generation, labeling nearly 60% of all images as synthetic, even though only half were.
This "AI attribution bias" leads to:
✅ Higher accuracy detecting synthetic images
❌ Lower accuracy recognizing authentic images (5/)
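To see how a single response bias produces both effects, here is a minimal arithmetic sketch. The 50/50 image mix and the ~60% "synthetic" response rate come from the thread; the specific hit and false-alarm rates are hypothetical.

```python
# Sketch: shifting the response criterion toward "AI-generated" raises
# accuracy on synthetic images and lowers it on authentic ones.
def accuracy(p_ai_given_synthetic, p_ai_given_authentic):
    """Per-class accuracy under a 50/50 synthetic/authentic image mix."""
    synthetic_acc = p_ai_given_synthetic        # hits on synthetic images
    authentic_acc = 1 - p_ai_given_authentic    # correct rejections
    p_say_ai = 0.5 * (p_ai_given_synthetic + p_ai_given_authentic)
    return synthetic_acc, authentic_acc, p_say_ai

# Unbiased observer: labels 50% of images "AI" overall.
print(accuracy(0.70, 0.30))  # (0.70, 0.70, 0.50)

# Same discrimination ability, criterion shifted toward "AI":
# labels ~60% of images "AI", as in the thread's headline figure.
print(accuracy(0.80, 0.40))  # (0.80, 0.60, 0.60)
```

Same ability to tell the classes apart, but the biased observer gains ten points of accuracy on synthetic images and loses ten on authentic ones.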
We ran a large pre-registered experiment with 1,800 U.S. adults.
Participants evaluated political images balanced by party lean (pro-Dem vs. pro-Rep) and image type (authentic vs. AI-generated), using actual images that circulated online during the election. (4/)
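As a hypothetical sketch of what that 2 (party lean) x 2 (image type) balance could look like in code: the two crossed factors come from the thread, while the cell count and image IDs are invented.

```python
import random

PARTY_LEAN = ("pro-Dem", "pro-Rep")
IMAGE_TYPE = ("authentic", "AI-generated")
IMAGES_PER_CELL = 5  # invented count; the design only requires balance

# Every lean x type cell gets the same number of images.
stimuli = [
    {"id": f"{lean}-{kind}-{i}", "lean": lean, "type": kind}
    for lean in PARTY_LEAN
    for kind in IMAGE_TYPE
    for i in range(IMAGES_PER_CELL)
]

random.shuffle(stimuli)  # randomize presentation order per participant
n_synthetic = sum(s["type"] == "AI-generated" for s in stimuli)
print(f"{len(stimuli)} images, {n_synthetic} synthetic")  # 20 images, 10 synthetic
```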
The answer is... not exactly.
⚠️ BUT our study shows a different threat:
People have become suspicious of real images too.
Authentic visual evidence is no longer taken for granted. (3/)
🗳️ During the 2024 U.S. presidential election, many #GenAI-generated political images appeared on social media.
But did voters mistake them for authentic imagery? (2/)
"Detecting Synthetic, Doubting Authentic: AI Attribution Bias for Political Imagery"
Full preprint: osf.io/preprints/os...
🧵 Here's what we found about how #GenAI is reshaping trust in political visuals during elections: (1/)
IU's Observatory on Social Media defends citizens from online manipulation, the opposite of censorship
osome.iu.edu/research/blo...
One downside of submitting articles to multiple divisions is ending up with a lot more reviews to handle... Looking forward to seeing everyone in Denver next year! #ICA
05.12.2024 21:34
An interesting paper about AI fact-checking from @matthewdeverna.com @harryyan.bsky.social @yang3kc.bsky.social @fil.bsky.social
04.12.2024 21:05