
LK Seiling

@lkseiling.bsky.social

DE/EN. 📍 Potsdam / Berlin. Coordination of @dsa40collaboratory.bsky.social, various research at @weizenbauminstitut.bsky.social, among other things: http://zusammenfuergleichstellung.de

196 Followers  |  291 Following  |  57 Posts  |  Joined: 03.01.2024

Latest posts by lkseiling.bsky.social on Bluesky

There have been increasingly shrill attacks on the EU over its digital legislation, based on accusations of "censorship" by defenders of "free speech" -- including, so it appears, the right to peddle an AI app that seemingly produces child sexual abuse material (CSAM).
1/9

05.02.2026 19:20 | 👍 65   🔁 33   💬 1   📌 3

Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users - in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
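
To make the set relations in the caption concrete, here is a minimal Python sketch using plain sets. Only the GAN/Boltzmann-machine placement (generative ∩ ANN) is stated in the caption; the other memberships, and the decision to leave ChatGPT and Siri unclassified, are illustrative assumptions rather than claims taken from the figure.

# Minimal sketch of the set relations described in the Figure 1 caption.
# Only the GAN and BM placements follow the caption explicitly; the
# remaining memberships are illustrative assumptions.
anns = {"GAN", "BM", "BERT", "AlexNet"}            # artificial neural networks
generative = {"GAN", "BM", "LDA", "QDA"}           # generative models
llms = {"BERT"}                                    # large language models
chatbots = {"A.L.I.C.E.", "ELIZA", "Jabberwacky"}  # chatbots
unverifiable = {"ChatGPT", "Siri"}                 # closed source: membership is guesswork

ai = anns | generative | llms | chatbots | unverifiable   # the hatched superset

# The purple subset from the caption: models that are both generative and ANNs.
print(generative & anns)     # {'GAN', 'BM'}

# Overlaps and differences show the terms are not interchangeable:
print(llms & anns)           # {'BERT'}: an LLM that is also an ANN
print(generative - anns)     # {'LDA', 'QDA'}: generative but not ANNs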

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 | 👍 3676   🔁 1856   💬 109   📌 373

Anyone interested in drafting an Art. 40(4) DSA access request to make use of this moment? Looks like a great chance to figure out how the @grok account fits into X's internal governance/moderation structures.

13.01.2026 21:48 | 👍 1   🔁 0   💬 0   📌 0

"Why Are Grok and X Still Available in App Stores?
Elon Muskโ€™s chatbot has been used to generate thousands of sexualized images of adults and apparent minors. Apple and Google have removed other โ€œnudifyโ€ appsโ€”but continue to host X and Grok." www.wired.com/story/x-grok...

08.01.2026 20:56 โ€” ๐Ÿ‘ 16    ๐Ÿ” 7    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 0
Grok turns off image generator for most users after outcry over sexualised AI imagery
X to limit editing function to paying subscribers after platform threatened with fines and regulatory action

The Guardian doing inexplicable voluntary free public relations work for a corporation profiting from sexual abuse by headlining this as the image gen being "turned off". It isn't off: they've just monetised it. What the fuck are we doing here people, come on.

www.theguardian.com/technology/2...

09.01.2026 09:18 | 👍 1016   🔁 341   💬 32   📌 29

liked undressing any woman or child on the platform? throw us some cash and you can keep doing it

09.01.2026 12:24 | 👍 104   🔁 29   💬 3   📌 1
Fuse - 39C3: Power Cycles Streaming
Live streaming from the 39th Chaos Communication Congress

It's about to start at #39c3: in "Hacking Karlsruhe - 10 years later", Simone and Jürgen take stock of ten years of GFF. Stream here: streaming.media.ccc.de/39c3/fuse

29.12.2025 10:52 | 👍 27   🔁 8   💬 2   📌 0

#Trump has been accusing the #EU of #censorship via its #DigitalServicesAct, but is any of what they are saying true?

Let's investigate 🧵🔽
#EUpol #USpol

26.12.2025 23:19 | 👍 1   🔁 7   💬 1   📌 0

Did you know that the new European Omnibus threatens research based on data donations?

We wrote an open letter highlighting the problematic amendment (see below).

Please consider reading and signing it. If the amendment goes through, this might well be the end of data donation research...

25.11.2025 11:10 | 👍 2   🔁 0   💬 0   📌 0

We recently concluded a special article series, "Seeing the Digital Sphere: The Case for Public Platform Data," in collaboration with the Knight-Georgetown Institute, in which experts explored why access to public platform data is critical. Here's a snapshot: (1/9)

17.11.2025 18:15 | 👍 14   🔁 11   💬 1   📌 0
Despite criticism: when the BW police will be allowed to use Palantir software
Despite much criticism and a petition, the state parliament has amended the police law and approved the use of the Palantir data analysis software.

The Greens are only a civil rights party when they are not in government. www.swr.de/swraktuell/b...

13.11.2025 06:46 | 👍 82   🔁 24   💬 4   📌 1
In Critical Condition - How To Stabilize Researcher Data Access? | TechPolicy.Press
Mark Scott and LK Seiling discuss the struggle for researcher access to social media data and an alternative future where transparency is seen as a civic good.

@lkseiling.bsky.social & @markscott.bsky.social plead in @techpolicypress.bsky.social for a new data access regime emerging from a "generation of decentralized platforms that treat data transparency not as a regulatory burden but as a civic and scientific good"

www.techpolicy.press/in-critical-...

12.11.2025 19:44 | 👍 14   🔁 8   💬 0   📌 0

The most transparent, de-gamified way to do social media would be a "more like this / less like this" interface that nobody else sees. The "like" is pointless here anyway, while sharing is everything.

01.11.2025 02:56 | 👍 8   🔁 5   💬 0   📌 0
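
A toy sketch of what such an interface could look like in code, with hypothetical names throughout: a private "more like this / less like this" signal that only feeds the owner's own ranking and is never displayed to anyone else.

from collections import defaultdict

class PrivateFeedback:
    """Hypothetical private 'more like this / less like this' store.
    Signals are kept per user and only used to re-rank that user's own
    feed; nothing here is ever shown to other accounts."""

    def __init__(self):
        self._scores = defaultdict(float)   # topic/author key -> private score

    def more_like_this(self, key: str):
        self._scores[key] += 1.0

    def less_like_this(self, key: str):
        self._scores[key] -= 1.0

    def rank(self, posts):
        # posts: iterable of (key, text); higher private score comes first
        return sorted(posts, key=lambda p: self._scores[p[0]], reverse=True)

fb = PrivateFeedback()
fb.more_like_this("data-access")
fb.less_like_this("engagement-bait")
print(fb.rank([("engagement-bait", "you won't believe this"),
               ("data-access", "new Art. 40 guidance")]))
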
How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise
Here are seven unusual financial agreements helping to drive the ambitions of the poster child of the A.I. revolution.

"Many of the deals OpenAI has struck - with chipmakers, cloud computing companies and others - are strangely circular. OpenAI receives billions from tech companies before sending those billions back to the same companies to pay for computing power and other services." www.nytimes.com/interactive/...

31.10.2025 10:57 | 👍 538   🔁 203   💬 38   📌 77

two for one, nice 😎

01.11.2025 05:18 | 👍 462   🔁 56   💬 13   📌 3
Deported despite a training contract: pulled out of bed, put on a plane
Rouaa and Ibrahim had signed their vocational training contracts and would therefore have been allowed to stay until completing them. They were deported anyway.

"Around four in the morning, officials entered the apartment of the Syrian Seleman family, took the siblings Rouaa (24) and Ibrahim (28) away and put them on a plane. Both were about to start vocational training. The Refugee Council of Schleswig-Holstein calls this 'bureaucratic madness'."

24.10.2025 17:19 | 👍 880   🔁 385   💬 64   📌 49

Since Meta and TikTok have been asked to respond individually, achieving such harmonisation seems unlikely - unless both researchers and regulators actively push for it. I have high hopes 😉

24.10.2025 12:21 | 👍 2   🔁 0   💬 0   📌 0

Researchers need a BASELINE STANDARD OF PUBLICLY ACCESSIBLE DATA: a minimum set of comparable, high-quality data from all platforms, along with robust quality checks to ensure validity.

24.10.2025 12:21 | 👍 2   🔁 0   💬 1   📌 0

2) Burdensome tools
The data access tools provided under Article 40(12) are also inadequate. Both Meta and TikTok reportedly supply incomplete data with questionable accuracy.

24.10.2025 12:21 | 👍 2   🔁 0   💬 1   📌 0

Researchers need STANDARDISED APPLICATION FORMS and FAIR, TRANSPARENT TERMS ACROSS PLATFORMS. Without them, access to data will remain extremely resource-intensive, discouraging cross-platform research and keeping much of this work in a legal grey area.

24.10.2025 12:21 | 👍 2   🔁 0   💬 1   📌 0

For example, both platforms request detailed information about researchers' qualifications, while Meta even asks for a date of birth and phone number. On top of that, researchers must agree to contradictory or restrictive terms just to apply for data access.

24.10.2025 12:21 | 👍 2   🔁 0   💬 1   📌 0

1) Burdensome procedures
In practice, this means that the application processes set up by these platforms may violate the DSA's provisions. TikTok's application form reportedly includes around 40 required fields, while Meta's goes up to 50, many of them unconnected to the requirements in Art. 40(8).

24.10.2025 12:21 | 👍 2   🔁 0   💬 1   📌 0

The key terms here are:
1) burdensome procedures
2) burdensome tools

While the findings themselves are not public, let's take a closer look 🧵👇

24.10.2025 12:21 | 👍 8   🔁 4   💬 1   📌 0

I'll be presenting this work at #CSCW2025 in Bergen on Tuesday at 2:30PM! We will be part of the session "Core Concepts in Privacy Research" (in the Bekken room) chaired by @emtseng.bsky.social ☺️

18.10.2025 22:45 | 👍 16   🔁 3   💬 0   📌 0

The parsing code and information on the schema based on Activity Streams 2.0 that we suggested: gitlab.weizenbaum-institut.de/lukas.seilin...

@lionw.bsky.social's latest paper on data donations: doi.org/10.12758/mda... (follow him for more to come soon!)

10.10.2025 10:19 | 👍 3   🔁 0   💬 0   📌 0
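
Not the linked parsing code itself, but a minimal sketch of what normalising a donated platform event into an Activity Streams 2.0-style record could look like. Only the AS2 vocabulary ("@context", "type", "actor", "object", "published") comes from the spec; the input field names and the donation.json file are hypothetical.

import json
from datetime import datetime

def to_activity(raw: dict) -> dict:
    # Map one platform-specific event (hypothetical field names) onto an
    # Activity Streams 2.0-style activity object.
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Like" if raw.get("action") == "like" else "Create",
        "actor": {"type": "Person", "name": raw.get("user", "donor")},
        "object": {"type": "Note", "content": raw.get("text", "")},
        "published": datetime.fromtimestamp(raw["timestamp"]).isoformat(),
    }

if __name__ == "__main__":
    # A donated export is assumed to be a JSON list of event dicts.
    with open("donation.json") as f:
        events = json.load(f)
    activities = [to_activity(e) for e in events]
    print(json.dumps(activities[:1], indent=2, ensure_ascii=False))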

"Instagram head says company is not using your microphone to listen to you (with AI data, it wonโ€™t need to)" techcrunch.com/2025/10/01/i...

01.10.2025 23:19 โ€” ๐Ÿ‘ 13    ๐Ÿ” 8    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 1

The policy paper:
bsky.app/profile/weiz...

24.09.2025 16:34 | 👍 1   🔁 0   💬 0   📌 0

For everyone who doesn't have time for ~36 pages and over 100 footnotes in the policy paper: in an interview with the @weizenbauminstitut.bsky.social I briefly explain what the #DSA is about, what role researcher data access (#DSA40) plays, and what shrugging US platforms have to do with it ⬇️

24.09.2025 16:33 | 👍 1   🔁 1   💬 1   📌 0
Cover of the publication

👉 Just published: "Empowering People in Online Spaces: Democracy and Well-being in Digital Societies" - Book of Abstracts of the Weizenbaum Conference 2025. #WIConf25

➡️ doi.org/10.34669/WI.... 🔗

10.09.2025 13:50 | 👍 24   🔁 12   💬 2   📌 3

Haven't read this paper yet, but something has been on my mind: because diffusion models and LLMs were more "discovered" than "invented," the explanations given for how they work, even by folks in the industry building them, are assumptions. Those stories drive AI narratives, and they're usually wrong.

27.08.2025 22:07 | 👍 56   🔁 14   💬 4   📌 1
