Natalie Sontopski

@stromponsky.bsky.social

Researcher 🔮 More-Than-Human Design //STS // Feminism // Critical AI

139 Followers  |  290 Following  |  161 Posts  |  Joined: 02.09.2023

Latest posts by stromponsky.bsky.social on Bluesky

From industrial ruins to ecological renewal. Developing a toolkit for actionable interventions. | The 2026 conference of stsing e.V. Julia Hoffmann, Natalie Sontopski

If you are planning to attend the stsing conference in Bochum next March, may I point you to our case? We happily accept attendees and are looking for a fun, interdisciplinary crowd eager to discuss all things more-than-human, Anthropocene and toolkits. Come and join us! stsing.org/before-ruins...

17.11.2025 15:08 — 👍 0    🔁 0    💬 0    📌 0

To me, though, that sounds like working in academia, ideally in the Mittelbau (mid-level academic staff) 😀 I'll just lean back and relax then; the billions should be rolling in any minute now.

17.11.2025 12:30 — 👍 1    🔁 0    💬 0    📌 0
Baotou tailings dam, Inner Mongolia: Brown sludge shoots out of a pipe into a grey-brown wasteland stretching to the horizon, where tiny industrial buildings can be seen. Source: Liam Young/BBC, URL: https://www.bbc.com/future/article/20150402-the-worst-place-on-earth

Lithium mining fields, Silver Peak, Nevada: An aerial photograph of a desert landscape, with what looks like a chessboard made up of swimming pools, the fields where lithium is mined, contrasting turquoise with the surrounding desert. Source: Doc Searly, Wikimedia, URL: https://en.wikipedia.org/wiki/Silver_Peak,_Nevada#/media/File:Chemetall_Foote_Lithium_Operation.jpg.

Differently coloured cargo containers in the port of Barcelona, stacked on top of each other like Lego bricks. Source: OneLoneClone/Wikimedia, URL: https://de.wikipedia.org/wiki/ISO-Container#/media/Datei:Puertobarcelona2.jpg.

Meta’s Stanton Springs data center in Georgia: An industrial complex illuminated in blue and yellow at night, separated from a neighbouring forest by fences. Source: Peter Essick/Fast Company, URL: https://www.fastcompany.com/91396678/meta-georgia-data-center-stanton-springs.

If you're looking for illustrations for "AI", don't use robots, glowing disembodied brains, or computer code in empty space. Here are some alternative suggestions:

17.11.2025 10:45 — 👍 54    🔁 25    💬 1    📌 1

The relationship of eugenics and AI goes waaaay back. @timnitgebru.bsky.social did an excellent talk about this! www.youtube.com/watch?v=P7XT...

07.11.2025 10:12 — 👍 3    🔁 2    💬 0    📌 0

What hell feels like: Transcribing interviews and hearing your own slowed-down voice on repeat for hours. Do my spoken words have any meaning? Can I not form at least one coherent sentence when talking to other people?

05.11.2025 09:59 — 👍 2    🔁 0    💬 0    📌 0

Most important lesson in the age of AI: "how do you develop the skills of discernment? This one is very very important: do not let the AI do the work for you before you have learned how to do it yourself."

04.11.2025 10:57 — 👍 4    🔁 0    💬 0    📌 0

A big recommendation for everyone who wants an overview of digital technologies and their complex, and in part disastrous, geopolitical, ecological and capitalist entanglements in the Anthropocene.

01.11.2025 10:48 — 👍 10    🔁 4    💬 1    📌 0

Since mostly male AI observers are being named here, let me just drop that there are also plenty of knowledgeable women critically observing AI, e.g. @emilymbender.bsky.social and @timnitgebru.bsky.social

01.11.2025 09:49 — 👍 7    🔁 0    💬 0    📌 0

When STEM discovers sociology but, instead of giving credit, sells it as new research. I see this happening a lot in STEM and it makes me furious.

27.10.2025 07:08 — 👍 0    🔁 0    💬 1    📌 0
A humming annoyance or jobs boom? Life next to 199 data centres in Virginia
Data centres were billed as a boon to Virginia’s economy. Now, residents are concerned about their impact on real estate and electricity costs.

Ever wondered what "your data is stored in the cloud" means? Contrary to real clouds, the digital cloud is based in massive, noisy data centers and is far less celestial - some would even describe living next to them as hell. www.bbc.com/news/article...

27.10.2025 07:03 — 👍 0    🔁 0    💬 0    📌 0

Content creation has become so cynical that homeless people are now pawns for allegedly "random acts of kindness". These content creators are very aware of their own privacy but do not care about the privacy of a vulnerable and marginalized community.

23.10.2025 08:44 — 👍 1    🔁 0    💬 0    📌 0

I love stumbling on resources like this, because teaching is hard and inspiration is always welcome.
I am especially drawn to the unit about the iconic "More Work for Mother" by Ruth Schwartz Cowan.

17.10.2025 06:36 — 👍 2    🔁 0    💬 0    📌 0

Like all AI companions before it, this too is an overhyped (and overpriced) gadget which will probably end up forgotten in a cupboard somewhere, not a companion with whom you'll form a long-lasting relationship for years to come.

17.10.2025 06:04 — 👍 1    🔁 0    💬 0    📌 0
Events

I'm excited: On 4 November at 1:30 p.m. I'll be speaking online at HS Merseburg about "Once an Assistant, Always an Assistant? How Binary Gender Perspectives Influence the Representation of AI". Psst: there are funny GIFs! Info & registration here: www.hs-merseburg.de/hochschule/p...

15.10.2025 06:25 — 👍 1    🔁 1    💬 0    📌 0
1 to 2 Starlink satellites are falling back to Earth each day

Starlink seems like just another in a long line of Elon Musk projects that seemed too good to be true: 1-2 satellites are falling back to Earth each day, which raises questions about how this affects the delicate future of near-Earth space.
earthsky.org/human-world/...

10.10.2025 07:39 — 👍 3    🔁 0    💬 0    📌 0
06.10.2025 22:24 — 👍 351    🔁 68    💬 1    📌 1
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 — 👍 3356    🔁 1713    💬 102    📌 308

i looked at the methodology for this and it is
a. a sex addiction counseling group in texas did a surveymonkey and extrapolated the results to the entire us population, which is the sort of research design that earns you an ff in an intro methods class (the extra f is for extra effort), and
b. p-hacked

03.10.2025 02:15 — 👍 9474    🔁 2496    💬 130    📌 139
“I want to build something”. A field study of pioneering communities, feminist speculation and Creative AI. | Proceedings of the 12th International Conference on Communities & Technologies

My paper on pioneering communities, feminist speculation & Creative AI is available in the ACM library. This paper is part of my PhD and I am so happy that it got accepted & published (doing a PhD is hard, as many of you probably know) and I hope it finds its audience 🤗 dl.acm.org/doi/10.1145/...

02.10.2025 13:28 — 👍 1    🔁 0    💬 0    📌 0
Disability Studies under threat: Protect and strengthen critical-emancipatory scholarship! • Universities and colleges are central actors in implementing the UN Convention on the Rights of Persons with Disabilities and in promoting an inclusive, barrier-free society. The dismantling of Disability...

So that inclusion doesn't remain an empty word, we must work to preserve an emancipatory, critical and diverse research landscape: weact.campact.de/petitions/di...

08.08.2025 19:52 — 👍 0    🔁 1    💬 0    📌 0

Consider also how few professional women feel they have any safe margin to allow for mistakes or mediocrity in their work (both risks of AI-generated output). Especially in male-dominated fields, women can’t afford to use a machine that shits the bed even 5% of the time

27.07.2025 01:31 — 👍 121    🔁 29    💬 3    📌 2

I did a thing and talked about my PhD research on pioneering communities and feminist speculations.

23.07.2025 10:52 — 👍 0    🔁 0    💬 0    📌 0

I'll be in Siegen next week for the Communities & Technologies conference to present my paper on pioneering communities, feminist speculation and Creative AI. So, if you wanna discuss "doing speculation", Donna Haraway or just talk about making stuff, let me know!

17.07.2025 15:17 — 👍 2    🔁 0    💬 0    📌 0
From the OpenAI community on Reddit Explore this post and more from the OpenAI community

This is fascinating: www.reddit.com/r/OpenAI/s/I...

Someone “worked on a book with ChatGPT” for weeks and then sought help on Reddit when they couldn’t download the file. Redditors helped them realize ChatGPT had just been roleplaying/lying and there was no file/book…

16.07.2025 20:07 — 👍 8162    🔁 1953    💬 254    📌 691

All in all, a very kafkaesque experience of bureaucracy, inefficiency and frustration, ending in quiet resignation.

14.07.2025 11:17 — 👍 0    🔁 0    💬 0    📌 0

Same thing with Google: I logged in and was asked to verify myself with the code I got via text. So I chose the option of backup codes, only to discover that Google needs me to verify this decision by sending a text to my phone. Which I don't have. Ergo, can't verify shit.

14.07.2025 11:17 — 👍 0    🔁 0    💬 1    📌 0

This has happened to me twice already and I really don't know why nobody has designed a protocol for this case. Like, how do you use two-factor authentication if the source of identification is missing?

14.07.2025 11:17 — 👍 0    🔁 0    💬 1    📌 0

A user experience from hell is when your SIM card is deactivated because your phone is missing, but your provider asks you to verify your identity by sending a text to your phone. Which you don't have. And therefore can't verify your identity this way.

14.07.2025 11:17 — 👍 0    🔁 0    💬 1    📌 0

I wonder if people who have used AI will be reflective enough to include similar disclaimers about their research methods for transparency.

01.07.2025 06:12 — 👍 0    🔁 0    💬 0    📌 0

Firefox extension for those who are interested.

29.06.2025 19:50 — 👍 3493    🔁 2095    💬 23    📌 42
