
Walter Quattrociocchi

@walter4c.bsky.social

Full Professor of Computer Science @Sapienza University of Rome. Data Science, Complex Systems

436 Followers  |  34 Following  |  37 Posts  |  Joined: 02.04.2024

Latest posts by walter4c.bsky.social on Bluesky


LLMs don’t form judgments.
They skip straight to the answer.
No evaluation.
No grounding.
Just fluent output.
When generation bypasses judgment, knowledge becomes a performance.
Welcome to Epistemia.
PNAS commentary ⬇️

www.pnas.org/doi/10.1073/...

26.11.2025 09:27 — 👍 4    🔁 2    💬 0    📌 0

New study shows that while LLMs and humans converge on similar judgments of news media reliability, they rely on very different underlying processes.

In delegating, are we confusing linguistic plausibility with epistemic reliability?

The age of "epistemia"

www.pnas.org/doi/epdf/10....

21.11.2025 10:36 — 👍 59    🔁 17    💬 2    📌 5
How LLMs generate judgments - Nature Computational Science

www.nature.com/articles/s43...

"driven by lexical and statistical associations rather than deliberative reasoning"

20.11.2025 17:40 — 👍 52    🔁 15    💬 0    📌 3
How LLMs generate judgments - Nature Computational Science

📢 Research Highlights out today! We highlight work by @walter4c.bsky.social, @matteocinelli.bsky.social, and colleagues on how LLMs generate judgments about reliability and political bias, and how their procedures compare to human evaluation. www.nature.com/articles/s43... #cssky

20.11.2025 18:30 — 👍 3    🔁 1    💬 1    📌 0

Data changed the info business model: confirmation → echo chambers → infodemics.
LLMs drop the cost of “knowledge-like” content to zero.
Result: Epistemia — when language sounds like knowledge.
Outsourcing shifts decisions from evidence → plausibility.
PNAS: https://www.pnas.org/doi/10.1073/pnas.1517441113

18.11.2025 08:44 — 👍 2    🔁 0    💬 0    📌 0

Grokipedia is not the problem.
It’s the signal.
What we’re seeing isn’t about AI or neutrality — it’s the rise of the post-epistemic web.
The question isn’t: is it true?
The question is: who made the model?

29.10.2025 08:05 — 👍 1    🔁 0    💬 0    📌 0

Together, these papers suggest a transformation:
→ Knowledge is no longer verified, but simulated
→ Platforms no longer host views; they shape belief architectures
→ Truth is not disappearing. It’s being automated, fragmented, and rebranded

29.10.2025 08:05 — 👍 1    🔁 0    💬 1    📌 0
Ideology and polarization set the agenda on social media - Scientific Reports

Paper 2 — Ideological Fragmentation of the Social Media Ecosystem
We analyzed 117M posts from 9 platforms (Facebook, Reddit, Parler, Gab, etc.).
Some now function as ideological silos — not just echo chambers, but echo platforms.
www.nature.com/articles/s41...

29.10.2025 08:05 — 👍 1    🔁 0    💬 1    📌 0

Paper 1 — The Simulation of Judgment in LLMs
We benchmarked 6 large language models against experts and humans.
They often agree on outputs — but not on how they decide.
Models rely on lexical shortcuts, not reasoning.
We called this epistemia.
www.pnas.org/doi/10.1073/...

29.10.2025 08:05 — 👍 0    🔁 0    💬 1    📌 0

We studied both, in two recent papers in @PNASNews and @PNASNexus:
Epistemia — the illusion of knowledge when LLMs replace reasoning with surface plausibility
Echo Platforms — when whole platforms, not just communities, become ideologically sealed

29.10.2025 08:05 — 👍 0    🔁 0    💬 1    📌 0

Two structural shifts are unfolding right now:
Platforms are fragmenting into echo platforms — entire ecosystems aligned around ideology.
LLMs are being used to simulate judgment — plausible, fluent, unverifiable.

29.10.2025 08:05 — 👍 1    🔁 0    💬 1    📌 0

#Grokipedia just launched.
An AI-built encyclopedia, pitched as a “neutral” alternative to Wikipedia.
But neutrality is not the point.
What happens underneath is.
👇

29.10.2025 08:05 — 👍 4    🔁 0    💬 1    📌 0

Timely, considering Grokipedia and all the related implications.

29.10.2025 07:38 — 👍 2    🔁 0    💬 0    📌 0

We don’t know your approach.
Ours assumes that to understand the perturbation, you first need to operationalize the task and compare how humans and models diverge.
That’s the empirical ground — not a belief about what LLMs “are.”

21.10.2025 05:52 — 👍 0    🔁 0    💬 1    📌 0

@geomblog.bsky.social @parismarx.com
@rebeccasear.bsky.social @wolvendamien.bsky.social

20.10.2025 17:43 — 👍 0    🔁 0    💬 0    📌 0

@emilybender.bsky.social @garymarcus.bsky.social @jevinwest.bsky.social @mrjamesob.bsky.social @abeba.bsky.social @katecrawford.bsky.social @floridi.bsky.social

20.10.2025 17:33 — 👍 0    🔁 0    💬 1    📌 0

β€œLLMs don’t understand.”
Of course. That was never the point.
The point is: we’re already using them as if they do β€”
to moderate, to classify, to prioritize, to decide.
That’s not a model problem.
It’s a systemic one.
The shift from verification to plausibility is real.
Welcome to Epistemia.

20.10.2025 17:31 — 👍 4    🔁 2    💬 1    📌 0

Coming from misinfo/polarization research,
we’re not asking what LLMs are.
We’re asking: what happens when users start trusting them as if they were search engines?
We compare LLMs and humans on how reliability and bias are judged.
That’s where the illusion, epistemia, begins.

19.10.2025 14:45 — 👍 1    🔁 0    💬 1    📌 0

Yes, we include recent work on evaluation heuristics and bias in LLMs.
Our focus is on how LLM outputs simulate judgment.
We compare LLMs and humans directly, under identical pipelines, on the same dataset.
“May rely” is empirical caution.
The illusion of reasoning is the point (not the premise).

19.10.2025 14:29 — 👍 1    🔁 0    💬 1    📌 1

Absolutely, we build on that line.
What we address is how these dynamics unfold now, at scale, where reliability is operationalized.
The novelty isn’t saying “LLMs aren’t agents.”
It’s showing how and when humans treat them as if they were.
Plausibility replacing reliability. Epistemia.

19.10.2025 14:15 — 👍 1    🔁 0    💬 1    📌 0

Thank you for sharing.
We explore the perturbation introduced when judgment is delegated to LLMs.
We study how the concept of reliability is operationalized in practice (moderation, policy, ranking).
Epistemia is a name for judgment without grounding.
IMHO, it is already here (a new layer of the infodemic).

18.10.2025 13:15 — 👍 3    🔁 0    💬 3    📌 0
Prompt used for all LLMs when provided with the scraped HTML homepage.


LLMs can mirror expert judgment but often rely on word patterns rather than reasoning. A new study introduces epistemia, the illusion of knowledge that occurs when surface plausibility replaces verification. In PNAS: https://ow.ly/ry7S50Xcv9b

16.10.2025 19:00 — 👍 13    🔁 7    💬 1    📌 0
14.10.2025 20:34 — 👍 2    🔁 1    💬 0    📌 0
Impact of Social Media on Society by Walter Quattrociocchi at the Department of Network and Data Science at the Central European University


📢 Join us for a public lecture by @walter4c.bsky.social about the impacts of social media on society.

⛓️‍💥 For online attendees, please register here: bit.ly/3FomgkF

12.03.2025 13:38 — 👍 1    🔁 1    💬 0    📌 0
Evaluating the effect of viral posts on social media engagement - Scientific Reports

6/ Curious about the details?
Read the full paper here: link.springer.com/article/10.1...

We hope this sparks new conversations about the value of attention in the digital age.

Let us know your thoughts! 💬

03.01.2025 14:04 — 👍 1    🔁 0    💬 0    📌 0

5/ 💡 What does this mean?
In the attention economy, chasing virality is risky. Instead, building consistent, sustained engagement is key to forming lasting connections with users.

03.01.2025 14:04 — 👍 0    🔁 0    💬 1    📌 0

4/ Rapid viral effects fade quickly, while slower, gradual processes last longer.
This suggests that collective attention is elastic and influenced by pre-existing engagement trends.

A "like" or viral post is often fleetingβ€”it doesn’t guarantee long-term impact.

03.01.2025 14:04 — 👍 0    🔁 0    💬 1    📌 0

3/ Key findings:

Viral events rarely lead to sustained growth in engagement.
We identified two types of virality:
1️⃣ "Loaded" virality: The final burst after a growth phase, followed by a decline.
2️⃣ "Sudden" virality: Unexpected events that briefly reactivate user attention.

03.01.2025 14:04 — 👍 0    🔁 0    💬 1    📌 0

2/ 📊 We analyzed over 1000 European news outlets on Facebook & YouTube (2018-2023), using a Bayesian structural time series model.

Our goal: Understand the impact of viral posts on user engagement, from short-term spikes to long-term trends.

03.01.2025 14:04 — 👍 0    🔁 0    💬 1    📌 0

1/ 🎉 First paper of 2025!

In the quantitative study of the attention economy, we asked a key question:
How much does a like—or a viral post—truly reverberate?

Our new study, published in Scientific Reports, dives into this crucial topic. 🧵

03.01.2025 14:04 — 👍 3    🔁 0    💬 1    📌 0
