This reminds me of a talk www.ens.psl.eu/agenda/works... More precisely gauthierroussilhe.com
28.12.2025 20:55 — 👍 2 🔁 0 💬 0 📌 0
@mingmenggeng.bsky.social
Postdoc at ENS-PSL & CNRS-Lattice, working on LLMs: https://www.mingmenggeng.com/ SUSTech -> École Polytechnique -> SISSA -> ENS-PSL
It's a bit off topic, but knowing a little Latin and Greek sometimes helps in finding a good acronym for a model or a method 😂
23.11.2025 15:30 — 👍 0 🔁 0 💬 0 📌 0Many thanks again to my collaborators! Looking forward to meeting more people at upcoming conferences!
11.08.2025 11:20 — 👍 1 🔁 0 💬 0 📌 0(--) Are Large Language Models Chameleons? An Attempt to Simulate Social Surveys arxiv.org/abs/2405.19323 (on another topic, oral presentation at ESRA)
Another unforgettable summer!
I was glad to present some of my recent work on "the impact of LLMs in society" at ACL (*3+1), IC2S2 (*2), ESRA, Youth in HD, and ICSSI.
Here are the papers and posters:
(1) Human-LLM Coevolution: Evidence from Academic Writing aclanthology.org/2025.finding...
(2) The Impact of Large Language Models in Academia: from Writing to Speaking aclanthology.org/2025.finding...
(3) LLM as a Broken Telephone: Iterative Generation Distorts Information aclanthology.org/2025.acl-lon...
(4) Wikipedia in the Era of LLMs: Evolution and Risks arxiv.org/abs/2503.02879
And one new paper I've mentioned to many people:
(5) code_transformed: The Influence of Large Language Models on Code arxiv.org/abs/2506.12014
hard to say arxiv.org/abs/2503.02879
02.05.2025 15:42 — 👍 1 🔁 0 💬 0 📌 0Still the word frequency in arXiv abstracts! 👇👇👇
[New preprint] Human-LLM Coevolution: Evidence from Academic Writing arxiv.org/abs/2502.09606
Hint 1: To delve or not to delve, that is the intricate question!
Hint 2: A short and easy-to-read paper!
Previous work:
(1) Is ChatGPT Transforming Academics' Writing Style? arxiv.org/abs/2404.08627
(2) The Impact of Large Language Models in Academia: from Writing to Speaking arxiv.org/abs/2409.13686
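The word-frequency tracking these posts allude to can be sketched minimally as below. The toy abstracts, the marker word "delve", and the per-10,000-token normalization are my own illustrative assumptions, not the actual pipeline used in the papers.

```python
import re

def word_frequency(abstracts, word):
    """Per-10,000-token frequency of `word` across a list of abstracts."""
    total_tokens = 0
    hits = 0
    for text in abstracts:
        tokens = re.findall(r"[a-z']+", text.lower())
        total_tokens += len(tokens)
        hits += sum(1 for t in tokens if t == word)
    return 1e4 * hits / total_tokens if total_tokens else 0.0

# Hypothetical toy abstracts, for illustration only.
pre_2023 = ["We study neural networks for parsing.",
            "This paper examines translation quality."]
post_2023 = ["We delve into the intricate dynamics of parsing.",
             "We delve deeper into translation quality."]

print(word_frequency(pre_2023, "delve"))            # 0.0
print(round(word_frequency(post_2023, "delve"), 2)) # 1428.57
```

Comparing the same statistic before and after a cutoff date is what makes a word like "delve" stand out as an LLM-style marker.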
This translation has some issues 😂 ChatGPT does better. "Fly Over Southern University of Science and Technology: One Continuous Shot Covering 2,970 Mu in 11 Minutes · Double First-Class · SUSTech · Giant Campus · Aerial Campus View" PS: 1 mu ≈ 0.165 acres
19.12.2024 00:46 — 👍 1 🔁 1 💬 0 📌 0bigger😉 www.bilibili.com/video/BV17J4...
18.12.2024 20:00 — 👍 0 🔁 0 💬 1 📌 1Interesting, I know all four students in the video 😂 It was clearly filmed between 2013 and 2017, and the campus looks quite different now 🧐
18.12.2024 18:09 — 👍 1 🔁 0 💬 1 📌 0You can find a little if you search her name 😬 My guess is that not many Chinese have moved here from X
14.12.2024 20:08 — 👍 1 🔁 0 💬 0 📌 0Probably the only one that mentions her name so far:
bsky.app/profile/yili...
More discussion on X:
x.com/sunjiao123su...
Almost no discussion about Rosalind Picard here. Can I assume that most Chinese AI researchers are still on Twitter/X?
14.12.2024 16:58 — 👍 8 🔁 1 💬 2 📌 0We are more interested in the density of LLM-style texts and its relative value (comparisons between categories and over time) than in establishing how many people are using LLMs. That share of users can be estimated with questionnaires; it cannot be estimated accurately from simulated data alone.
And, in an earlier paper 🧐😎 arxiv.org/abs/2404.08627
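A minimal sketch of the "density, not head count" point in that post: compare a marker word's observed rate against its pre-LLM baseline, and use the excess only for relative comparisons. The rates, baseline, and category names below are made up for illustration.

```python
def excess_frequency(baseline_rate, observed_rate):
    """Excess of an LLM-favored word's rate over its pre-LLM baseline.
    A proxy for the density of LLM-style text, not a count of users."""
    return max(observed_rate - baseline_rate, 0.0)

# Hypothetical per-10k-token rates of a marker word such as "delve".
baseline = 0.5                            # assumed pre-ChatGPT rate
rates = {"cs.CL": 6.5, "math.AG": 0.7}    # made-up category rates

# Relative comparison between categories and over time is meaningful;
# translating this into "X% of authors use LLMs" is not.
excess = {cat: excess_frequency(baseline, r) for cat, r in rates.items()}
print(excess)  # {'cs.CL': 6.0, 'math.AG': 0.19999999999999998}
```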
04.12.2024 17:33 — 👍 1 🔁 0 💬 0 📌 0Interesting work! In addition to what you mentioned, we also noticed that more LLM-style words have started to appear in the presentations of ML conferences: arxiv.org/abs/2409.13686
04.12.2024 17:24 — 👍 1 🔁 0 💬 1 📌 0Maybe it's hard to define the previous distribution? (not PI, just intuition) 👀 "All happy families are alike; each unhappy family is unhappy in its own way."
04.12.2024 09:12 — 👍 0 🔁 0 💬 0 📌 0