
Melanie I Stefan

@melanieistefan.bsky.social

Computational Neurobiologist at Medical School Berlin. Poster child of failure. She/her

355 Followers  |  658 Following  |  700 Posts  |  Joined: 10.08.2023

Latest posts by melanieistefan.bsky.social on Bluesky

Customer support AI bots are their own circle of hell. This particular one just suggested I contact customer support.

10.11.2025 14:08 — 👍 1    🔁 0    💬 0    📌 0
DNA scientist James Watson has a remarkably long history of sexist, racist public comments. "People say it would be terrible if we made all girls pretty," he said in 2003. "I think it would be great."

Hey folks, as news of Watson's demise spreads, please don't set aside his weighty legacy of misogyny and racism. He was truly among the worst of us. www.vox.com/2019/1/15/18...

07.11.2025 19:44 — 👍 2329    🔁 897    💬 90    📌 217

That's a weird way to spell "about an applicant's race and gender"

07.11.2025 06:46 — 👍 11    🔁 0    💬 0    📌 0

Also paging @danny14.bsky.social @prubisch.bsky.social

06.11.2025 10:53 — 👍 2    🔁 0    💬 0    📌 0

Hi hello yes please

05.11.2025 22:10 — 👍 3    🔁 0    💬 2    📌 0

Is someone a racist just because they think racist thoughts, say racist things, and act in racist ways? We'll keep you posted!

04.11.2025 11:34 — 👍 31    🔁 1    💬 0    📌 0

The amount of AI-generated art in slides at this conference, primarily used by older scientists, is killing me. Scientists, please: don't use these AI platforms to make your figures or slides. They look bad, and I have yet to see them meaningfully improve the message of a talk.

31.10.2025 03:09 — 👍 1773    🔁 362    💬 39    📌 31

Fine, but arguably, if writing things that nobody ever reads is part of our job description, we should question why they need to be written in the first place. The production of writing is not an end in itself. If it doesn't serve either the writer or the reader, why do it at all?

29.10.2025 17:48 — 👍 2    🔁 1    💬 2    📌 0

So cool!

29.10.2025 15:27 — 👍 1    🔁 0    💬 0    📌 0

[PS] What is a good PhD? I think it's one that only that particular person could have done, one that uses their unique and weird combination of skills and interests and passions. What makes a student great is exactly the ways in which they are different from a chatbot. And I wish they knew that.

29.10.2025 10:01 — 👍 7    🔁 0    💬 0    📌 0

One thing I have learned in my (very short) stint as a learner of Dutch is that the Dutch are quite tolerant with respect to the "r" sound. (Though maybe my teacher was just saying that to be nice.)

29.10.2025 09:50 — 👍 1    🔁 0    💬 1    📌 0

It makes me sad to think that there are people who think they have nothing to offer in that department, that their humanness can so easily be replaced. [Fin]

29.10.2025 09:46 — 👍 3    🔁 0    💬 1    📌 0

It also made me sad that they are essentially allowing themselves to be replaced by an Averages Machine, when what makes people interesting is the ways in which we are different from the average: not necessarily better, just unique. Our quirks. Our weird nerdy interests. Our individual stories. [17]

29.10.2025 09:44 — 👍 2    🔁 0    💬 1    📌 0

I was listening to a podcast a while ago about people who used chatbots to do their messaging on online dating apps. Aside from the question of why you'd want to be dating at all if you think "having a conversation with another human person" is a tedious task better outsourced to machines. [16]

29.10.2025 09:41 — 👍 1    🔁 0    💬 1    📌 0

(Also, don't @ me, I know that means and medians are not the same, and that it is technically possible that most people are in fact better-than-average drivers, in much the same way that most of us have more-than-average fingers; see the quick sketch after this post.) [15]

29.10.2025 09:38 — 👍 2    🔁 0    💬 1    📌 0
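Not part of the original thread, but a quick numerical sketch of the fingers point above, with made-up numbers: if a few people have fewer than ten fingers and nobody has more, the mean slips below ten, so most people end up above the average even though the median stays at ten.

```python
# Toy illustration with invented numbers: why most people can have
# more fingers than the average. A few people missing fingers pull
# the mean below ten, while the median stays at ten.
from statistics import mean, median

fingers = [10] * 997 + [9, 8, 7]  # 997 people with 10 fingers, 3 with fewer

avg = mean(fingers)    # 9.994
mid = median(fingers)  # 10.0
above_average = sum(1 for f in fingers if f > avg)

print(f"mean = {avg:.3f}, median = {mid}")
print(f"{above_average} of {len(fingers)} people have more than the average number of fingers")
```

The same asymmetry is why "most people think they are better-than-average drivers" is not automatically a statistical impossibility.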

GenAI, being a statistical model, will essentially give you an average. Averagely good writing. It's interesting that so many people think it's an improvement over what they can do. (Maybe the opposite of the often-quoted statistic that most people think they are better-than-average drivers.) [14]

29.10.2025 09:35 — 👍 5    🔁 0    💬 1    📌 0

I wonder if students are being robbed of feelings of self-efficacy and competence. They are definitely being robbed of the experience of struggling through an assignment like an essay and learning that they can do it. Or of struggling with a concept and finally wrapping their head around it. [13]

29.10.2025 09:33 — 👍 7    🔁 0    💬 1    📌 0

And also that he felt like, if he couldn't use AI, his hands were tied in terms of quality. As if there wasn't an arsenal of tried-and-tested non-AI methods to improve one's writing. As if things like "have a friend read it" or "read it back to yourself out loud" were somehow unavailable to him. [12]

29.10.2025 09:30 — 👍 4    🔁 0    💬 1    📌 0

And you know what, it was fine. Could have done with some polishing here and there. Could have done with a human proofread. But absolutely within the quality range that I would expect from student work. And it's kind of sad that he thought it was worth apologising for. [11]

29.10.2025 09:26 — 👍 4    🔁 1    💬 1    📌 0

One student, apparently knowing that I am a sceptic, sent me an assignment with the caveat that the language would be quite bad. Because usually he uses AI to improve it, but in this case he did NOT (his capitals), so sorry in advance. [10]

29.10.2025 09:24 — 👍 2    🔁 0    💬 1    📌 0

It breaks my heart also that some students seem to think that, and seem to think that it's not even a set of skills they can (and should) learn. They have big, beautiful, amazing brains and they don't know what to do with them. [9]

29.10.2025 09:22 — 👍 4    🔁 0    💬 1    📌 0

Academics who have spent a literal decade training to read and understand texts and critically evaluate them think that PocketChad can do it better than them. This breaks my heart. [8]

29.10.2025 09:20 — 👍 9    🔁 5    💬 1    📌 0

But also every time you let ChatGPT write your introduction or do your lit search or write your ethics proposal (yes, yes, I have seen it all), you are saying you think it can do better than you. [7]

29.10.2025 09:18 — 👍 5    🔁 1    💬 2    📌 0

After all, you can ask stupid questions to a chatbot. A chatbot doesn't judge. (Turns out that particular postdoc also doesn't judge, except she judges you for using a chatbot. And rightly so.) [6]

29.10.2025 09:16 — 👍 6    🔁 0    💬 1    📌 0

But it's more than a lack of confidence in other people's abilities. It's also a lack of confidence in their own. Maybe my colleague felt a bit queasy about having a young early-career scientist explain things that were outside his expertise. [5]

29.10.2025 09:14 — 👍 2    🔁 0    💬 1    📌 0

I also noticed it when my postdoc hung one of her posters next to her office door, and a colleague, rather than talking to her about it, had ChatGPT explain it to him. Like dude, the actual expert is sitting right there, and she is thoughtful and creative and amazing. But you choose PocketChad. [4]

29.10.2025 09:12 — 👍 6    🔁 1    💬 1    📌 0

Like, no discussion, no back and forth, no trying to make sense of what had been said so far in the discussion. (Mind you, a discussion between experts in their field. Who might be wrong sometimes, sure, but who have a far deeper understanding than cyber-Chad over there.) [3]

29.10.2025 09:09 — 👍 4    🔁 0    💬 1    📌 0

I first noticed it as a lack of confidence in other people's expertise. Like, we would have a discussion on some science (our science) question, and some colleagues would make excellent and carefully considered arguments. And then someone would butt in and just paste what ChatGPT said. [2]

29.10.2025 09:06 — 👍 3    🔁 0    💬 1    📌 0

A little thread on how the crisis around GenAI in academia is also a crisis of confidence, and it makes me quite sad. [1/n]

29.10.2025 09:03 — 👍 14    🔁 9    💬 2    📌 3

Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they're "right" it's because correct things are often written down, so those patterns are frequent. That's all. (A toy sketch of this follows below.)

19.06.2025 11:21 — 👍 36986    🔁 11416    💬 639    📌 967
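Not from the post above, but a minimal toy sketch of the "mimic patterns of words, probabilistically" point. The corpus, words, and counts here are invented for illustration: the model below stores nothing except how often one word follows another, yet it usually completes "of" with "france" simply because that pattern is frequent in its training text.

```python
# Toy next-word model (invented corpus, hugely simplified):
# it records only how often each word follows another, then samples
# continuations in proportion to those counts. Frequent patterns
# (which are often the correct ones) come out often; it stores no facts.
import random
from collections import defaultdict

corpus = (
    "paris is the capital of france . "
    "paris is the capital of france . "
    "paris is the capital of fashion ."
).split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to observed follow-up counts."""
    followers = counts[word]
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Mostly "france", occasionally "fashion" -- frequency, not knowledge.
print([next_word("of") for _ in range(10)])
```

Real LLMs learn vastly richer statistics over tokens with neural networks, but the "right because it is frequent" behaviour sketched here is the point the post makes.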
