Last week I presented my latest work at the Findings Poster Session of ACL 2025 in Vienna!
If you missed it, check it out 🔥
michelepapucci.github.io/blog/finding...
@mpapucci.bsky.social
@mpapucci_ on X.
michelepapucci.github.io/blog/paper-a...
New blog post about our latest paper, accepted at Findings of ACL.
📣 #clicit2025 paper submission deadline extension: 16/06/2025! 📣
03.06.2025 13:22
8/ TL;DR:
- State-of-the-art detectors today are too shallow
- A bit of style alignment makes them crumble
- We need stronger benchmarks
- We develop a way to create hard, in-domain texts for building and evaluating the next generation of more robust and reliable MGT detectors
7/ What about Humans?
Human performance was unaffected: people performed poorly at detecting machine-generated text (around 50% accuracy in a binary task, i.e. chance level) both before and after our alignment.
6/ We tested a bunch of state-of-the-art detectors:
- Mage
- Radar
- LLM-DetectAIve
- Binoculars
- Two domain-specific detectors trained by us: a linear SVM and a RoBERTa classifier
The most robust detector against our type of attack was Radar.
5/ We tested two ways of selecting texts for alignment: a random one and a linguistically motivated one. The latter proved better at aligning an LLM's feature distributions to the humans', but the former seemed to work better at dropping detector accuracy.
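A minimal sketch of what these two selection strategies could look like. This is purely illustrative: the function names and the single feature (average word length) are my assumptions, not the paper's actual linguistic profile, which uses a richer feature set.

```python
import random
import statistics

def avg_word_length(text):
    # Toy stand-in for a real linguistic feature.
    words = text.split()
    return sum(len(w) for w in words) / len(words)

def select_random(texts, k, seed=0):
    # Baseline: pick k texts uniformly at random.
    return random.Random(seed).sample(texts, k)

def select_linguistic(machine_texts, human_texts, k):
    # Pick the machine texts whose feature value deviates most from the
    # human mean, so preference training on them shifts the model hardest.
    human_mean = statistics.mean(avg_word_length(t) for t in human_texts)
    return sorted(
        machine_texts,
        key=lambda t: abs(avg_word_length(t) - human_mean),
        reverse=True,
    )[:k]
```

The intuition: a random sample covers the whole machine distribution, while the targeted variant concentrates training signal on the most "machine-like" outliers.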
03.06.2025 13:22
4/ We tested two domains (News and Abstracts) with two families of models (Llama and Gemma). Detector performance on text generated by the aligned models dropped by up to 60%.
03.06.2025 13:21
3/ Why does it work?
Most detectors rely on shallow stylistic cues: word length, punctuation patterns, and sentence structure. Aligning LLMs to human style shifts the model's writing toward humans', and detectors can't keep up.
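To make "shallow stylistic cues" concrete, here is a sketch of the kind of surface features such detectors tend to lean on. The function name and the exact feature set are illustrative assumptions, not the features of any specific detector from the thread.

```python
import string

def shallow_features(text):
    # Surface cues only: no semantics, just form.
    words = text.split()
    sentences = [
        s for s in text.replace("!", ".").replace("?", ".").split(".")
        if s.strip()
    ]
    return {
        # Mean word length, ignoring attached punctuation.
        "avg_word_len": sum(len(w.strip(string.punctuation)) for w in words) / len(words),
        # Share of characters that are punctuation marks.
        "punct_rate": sum(c in string.punctuation for c in text) / len(text),
        # Mean sentence length in words.
        "avg_sent_len": len(words) / len(sentences),
    }
```

Because such features are cheap for an aligned generator to imitate, a detector built on them is exactly the kind that style alignment breaks.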
2/ We introduce a simple pipeline:
We fine-tune LLMs via Direct Preference Optimization (DPO) on pairs of human-written and machine-generated texts, marking the former as preferred. The goal is to shift the LLMs' writing style toward humans'.
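A minimal sketch of how such preference pairs could be assembled; the function name and example strings are illustrative, but the `prompt`/`chosen`/`rejected` record shape is the standard format DPO trainers expect.

```python
def build_preference_pairs(prompts, human_texts, machine_texts):
    """Pair each prompt with a human continuation (chosen) and a
    machine continuation (rejected)."""
    pairs = []
    for prompt, human, machine in zip(prompts, human_texts, machine_texts):
        pairs.append({
            "prompt": prompt,
            "chosen": human,      # human-written: preferred
            "rejected": machine,  # machine-generated: dispreferred
        })
    return pairs

# Toy example (invented data, not from the paper):
pairs = build_preference_pairs(
    ["Write a news lead about the election."],
    ["Voters headed to the polls on Tuesday amid record turnout."],
    ["The election, which took place recently, was an important event."],
)
```

A dataset of such records can then be fed to an off-the-shelf DPO trainer, which pushes the model's outputs toward the "chosen" side of each pair.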
🧵 1/ Machine-Generated Text (MGT) detection is failing
Our paper, accepted at Findings of ACL 2025, shows that LLMs can fool generated-text detectors.
arxiv.org/abs/2505.24523
Andrea Pedrotti, Cristiano Ciaccio, @alessiomiaschi.bsky.social, @gpucce.bsky.social, Felice Dell'Orletta, Andrea Esuli
itch.io/jam/inkjam-2...
Our submission for the #inkjam is now up and ready to be played and rated!
Let us know what you think of our ugly little game made in a few hours ahahah