
Thomas Davidson

@thomasdavidson.bsky.social

Sociologist at Rutgers. Studies far-right politics, populism, and hate speech. Computational social science. https://www.thomasrdavidson.com/

861 Followers  |  651 Following  |  71 Posts  |  Joined: 13.10.2023

Latest posts by thomasdavidson.bsky.social on Bluesky


Last week the story was that TikTok censored anti-Trump/ICE/Pretti videos after the U.S. ownership change. We investigated with a large set of US TikTok data and found some interesting results, short thread...

04.02.2026 17:52 — 👍 213    🔁 90    💬 6    📌 16

First 50 downloads are free if you use this link: www.tandfonline.com/eprint/AYC6H...

03.02.2026 15:08 — 👍 0    🔁 1    💬 0    📌 0
Structuring articulation: asylum applications and elite political communication on social media during the European refugee crisis How do political parties adapt their discourse when confronted with rapidly changing structural conditions? This study examines the relationship between asylum applications and elite political comm...

New paper examines the relationship between online political discourse and structural conditions.

European parties responded to national asylum numbers by increasing online attention, but content differed. Left-wing parties posted more in solidarity and right-wing parties shared more about crime.

03.02.2026 15:07 — 👍 2    🔁 3    💬 1    📌 0

Hey sociologists, I'm organizing an ASA Methodology session on AI! Submissions are due by 2/25. Looking forward to a timely cross-method convo on emerging research best practices and disciplinary norms and ethics in August.

02.02.2026 17:49 — 👍 13    🔁 7    💬 1    📌 0

Very interesting research paper showing that using AI while programming can significantly reduce mastery of a topic. Perhaps unsurprising, but the lack of significant speed gains in this exercise is remarkable

www.anthropic.com/research/AI-...

31.01.2026 00:23 — 👍 177    🔁 58    💬 4    📌 6
Generative AI in Sociological Research: State of the Discipline | Sociological Science | Posted January 20, 2026

Now out in Sociological Science

(How) do sociologists use GenAI for their research? Find out in our paper.

Written with @ajalvero.bsky.social @dustinstoltz.com and Marshall Taylor. Thank you to everyone who participated in the survey!!

20.01.2026 20:16 — 👍 42    🔁 17    💬 1    📌 1
How we built CoPE We just published the methodology behind CoPE. This is the model that powers Zentropi, and we think the approach might be useful for others working on policy-steerable classification systems. We had ...

We just published the methodology behind CoPE, our 9B parameter model that matches GPT-4o at content classification at 1% the size! The model is already open source, but now we're sharing our training technique. blog.zentropi.ai/how-we-built... 🧡 1/6

15.01.2026 18:51 — 👍 87    🔁 24    💬 2    📌 4
Measuring context sensitivity in artificial intelligence content moderation - Nature Human Behaviour Automated content moderation systems designed to detect prohibited content on social media often struggle to account for contextual information, which sometimes leads to the erroneous flagging of inno...

www.nature.com/articles/s41...

06.01.2026 21:06 — 👍 1    🔁 0    💬 0    📌 0

On the topic of AI and social science research, the Research Briefing on my Nature Human Behaviour paper is now online. It's an accessible summary of the research, implications, and some behind-the-scenes commentary.

Thanks @gligoric.bsky.social for providing an expert opinion!

06.01.2026 21:06 — 👍 5    🔁 1    💬 1    📌 0

Particularly if academics block each other for engaging in legitimate discussions about contested issues

06.01.2026 18:06 — 👍 2    🔁 0    💬 0    📌 0

On the consent front, I think the use of LLMs to create more bespoke, even "individualized" instruments raises new ethical questions that warrant discussion. Seeing how polarizing the topic has become, I expect we'll see a lot more acrimonious debate before any consensus emerges

06.01.2026 18:05 — 👍 2    🔁 0    💬 1    📌 0

Putting informed consent aside, there is a strong minimal risk argument for using LLMs to analyze publicly available documents (particularly open-access published research), given that such materials are already routinely ingested by LLMs, with and without researcher intervention.

06.01.2026 17:52 — 👍 5    🔁 0    💬 1    📌 0

New paper in Social Science Computer Review 🚨

We conducted two experiments to understand the effects of reading AI summaries, focusing on historical events 📜🤖👩‍💻

We found that AI improved factual recall, possibly due to post-training optimization

journals.sagepub.com/doi/10.1177/...

29.12.2025 20:45 — 👍 10    🔁 1    💬 0    📌 0

This generally seems like a bad idea - particularly paying Qualtrics to do it.

However, there are various applications of synthetic data, or "silicon sampling", that are worth exploring. We discuss this in the context of the recent SMR special issue on gen AI: journals.sagepub.com/doi/abs/10.1...

16.12.2025 18:58 — 👍 11    🔁 2    💬 0    📌 0

New paper in Nature Human Behaviour.

I use a conjoint experiment to test multimodal large language models (MLLMs) for context-sensitive content moderation and compare with human subjects. Methodologically, this demonstrates how social science techniques can enhance AI auditing. 💻🤖💬

15.12.2025 15:03 — 👍 16    🔁 2    💬 2    📌 0

Thanks, Rohan. Looking forward to catching up in Toronto in the spring!

15.12.2025 15:16 — 👍 1    🔁 0    💬 0    📌 0
Multimodal large language models can make context-sensitive hate speech evaluations aligned with human judgement - Nature Human Behaviour This study examines how multimodal large language models evaluate hate speech. Larger models can make context-sensitive decisions aligned with human judgement. However, pervasive demographic and lexic...

The paper is out now: nature.com/articles/s41...

You can read it without a paywall using this guest link: rdcu.be/eUIlm

15.12.2025 15:03 — 👍 0    🔁 0    💬 0    📌 0

Overall, these results show that MLLMs can make more context-sensitive moderation decisions than text-based classifiers. But these systems still make mistakes, and context can cut both ways, eliminating some biases while enabling others. Human oversight remains essential if deployed for moderation.

15.12.2025 15:03 — 👍 0    🔁 0    💬 1    📌 0

Additionally, some models are overtly biased and are particularly sensitive to visual identity cues (AI-generated profile pictures). This demonstrates how different data modalities lead to varying levels of algorithmic bias.

15.12.2025 15:03 — 👍 0    🔁 0    💬 1    📌 0

When considering the identity of the author, some MLLMs make context-sensitive judgments comparable to human subjects. e.g., less likely to flag Black users for using reclaimed slurs, a common false positive. But the results also reveal less normative decisions regarding so-called "reverse racism".

15.12.2025 15:03 — 👍 0    🔁 0    💬 1    📌 0

I find that MLLMs follow a hierarchy of offensive language consistent with human judgments and show similarities across other attributes. There is heterogeneity across models, particularly among the smallest open-weights versions.

15.12.2025 15:03 — 👍 0    🔁 0    💬 1    📌 0
Multimodal large language models can make context-sensitive hate speech evaluations aligned with human judgement - Nature Human Behaviour This study examines how multimodal large language models evaluate hate speech. Larger models can make context-sensitive decisions aligned with human judgement. However, pervasive demographic and lexical biases remain, and visual identity cues may amplify disparities.

Article by @thomasdavidson.bsky.social on multimodal LLMs and hate speech: larger models aligned with human judgment, but pervasive demographic and lexical biases remain, and visual identity cues may amplify disparities.

15.12.2025 14:32 — 👍 4    🔁 2    💬 0    📌 0

New paper by Sean Westwood:

With current technology, it is impossible to tell whether survey respondents are real or bots. Among other things, this makes it easy for bad actors to manipulate outcomes. No good news here for the future of online survey research

18.11.2025 19:15 — 👍 777    🔁 390    💬 41    📌 127

I'm recruiting multiple PhD students for Fall 2026 in Computer Science at @hopkinsengineer.bsky.social 🍂

Apply to work on AI for social sciences/human behavior, social NLP, and LLMs for real-world applied domains you're passionate about!

Learn more at kristinagligoric.com & help spread the word!

05.11.2025 14:56 — 👍 28    🔁 17    💬 0    📌 1

Sky News does a great job here of showing that right-wing voices get outsized amplification on X. The why of it is complicated (we have upcoming work on this). But the fact of it is undeniable. The platform is also full of clickbaity bullshit. Not unrelated.

06.11.2025 15:12 — 👍 76    🔁 25    💬 1    📌 1

Happy to announce the UTM Sociology Speaker Series for 2025-2026 @utm-research.bsky.social @uoftsociology.bsky.social

27.10.2025 06:56 — 👍 9    🔁 2    💬 1    📌 0

🇫🇷 We are hiring 🇫🇷

Assistant or Associate Professor Position in Computational Sociology @crestsociology.bsky.social @ipparis.bsky.social

Details here (please RT)
www.shorturl.at/E57le

20.10.2025 14:40 — 👍 47    🔁 45    💬 1    📌 1
Assistant Professor in Computational Sociology The Department of Sociology at Rutgers University, New Brunswick, seeks applications for a tenure-track position at the Assistant Professor level specializing in Computational Sociology. The search i...

There is one week left to apply to join us at Rutgers! We're hiring an Assistant Professor in Computational Sociology as part of a cluster of new hires in data science and AI.

Applications are due next Wednesday, 10/15.

09.10.2025 14:11 — 👍 12    🔁 11    💬 0    📌 0
