Tereza Blazkova

@terezablazek.bsky.social

PhD Student in Social Data Science, University of Copenhagen | AI & Society | Algorithmic Fairness | ML | Education Data Science | https://tereza-blazkova.github.io/

81 Followers  |  173 Following  |  5 Posts  |  Joined: 29.12.2024

Latest posts by terezablazek.bsky.social on Bluesky

Preview
SODAS Data Discussion 3 (Fall 2025) SODAS is delighted to host Daniel Juhász Vigild and Stephanie Brandl for the Fall 2025 Data Discussion series!

Join us for a Data Discussion on Friday, November 7! 📅

Daniel Juhász Vigild will start by exploring how government use of AI impacts its trustworthiness, while Stephanie Brandl will examine whether LLMs can identify and classify fine-grained forms of populism.

Event 🔗: sodas.ku.dk/events/sodas...

31.10.2025 13:12 — 👍 3    🔁 3    💬 0    📌 1
Preview
From the ChatGPT community on Reddit: ChatGPT asked if I wanted a diagram of what's going on inside my pregnant belly. Explore this post and more from the ChatGPT community

time to implement it into healthcare systems www.reddit.com/r/ChatGPT/co...

26.08.2025 05:40 — 👍 4    🔁 1    💬 0    📌 0
Post image

Cheers from Learning@Scale poster 033 😋

22.07.2025 11:56 — 👍 1    🔁 0    💬 0    📌 0

Loved working on this with @kizilcec.bsky.social, Magnus Lindgaard Nielsen, David Dreyer Lassen, and @andbjn.bsky.social. Thank you for the collaboration; looking forward to future work!

21.07.2025 08:15 — 👍 1    🔁 0    💬 0    📌 0

To learn more, check out our paper and TikTok-style video, or see my poster and talk this week in Palermo!
Paper and video: dl.acm.org/doi/10.1145/...
Poster session: learningatscale.acm.org/las2025/inde...
Lightning talk: fair4aied.github.io/2025/

21.07.2025 08:15 — 👍 0    🔁 0    💬 1    📌 0

Depending on the time point and fairness metric, we observe both alarming disparities and confidence intervals that include zero.

21.07.2025 08:03 — 👍 0    🔁 0    💬 1    📌 0
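To make the "disparities and confidence intervals at each time point" idea concrete, here is a minimal illustrative sketch, not the paper's code: it computes a demographic-parity-style gap in predicted dropout rates at each enrollment snapshot, with a percentile bootstrap confidence interval. The metric choice, group labels, column names, and file are all hypothetical placeholders.

```python
# Illustrative sketch (assumptions: demographic parity difference as the metric,
# groups labelled "A"/"B", columns "group", "predicted_dropout", "time_point").
import numpy as np
import pandas as pd


def parity_gap(df: pd.DataFrame) -> float:
    """Difference in mean predicted dropout rate between the two groups."""
    rates = df.groupby("group")["predicted_dropout"].mean()
    return rates.loc["A"] - rates.loc["B"]


def bootstrap_ci(df: pd.DataFrame, n_boot: int = 1000, alpha: float = 0.05, seed: int = 0):
    """Percentile bootstrap confidence interval for the parity gap."""
    rng = np.random.default_rng(seed)
    gaps = [
        parity_gap(df.sample(frac=1.0, replace=True, random_state=int(rng.integers(1 << 31))))
        for _ in range(n_boot)
    ]
    return np.quantile(gaps, [alpha / 2, 1 - alpha / 2])


# Evaluate the same metric at every snapshot of the enrollment period:
# a gap whose CI excludes zero at one time point may include zero at another.
# predictions = pd.read_csv("predictions_by_timepoint.csv")  # hypothetical file
# for t, df_t in predictions.groupby("time_point"):
#     lo, hi = bootstrap_ci(df_t)
#     print(f"t={t}: gap={parity_gap(df_t):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```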
Post image

While human behavior and the data describing it evolve over time, fairness is often evaluated at a single snapshot. Yet, as we show in our newly published paper, fairness is dynamic: studying dropout prediction across the enrollment period, we found that fairness shifts over time.

21.07.2025 08:02 — 👍 4    🔁 2    💬 1    📌 0
Preview
Large language models act as if they are part of a group - Nature Computational Science An extensive audit of large language models reveals that numerous models mirror the 'us versus them' thinking seen in human behavior. These social prejudices are likely captured from the biased conten...

Happy to write this News & Views piece on the recent audit showing LLMs picking up "us versus them" biases: www.nature.com/articles/s43... (Read-only version: rdcu.be/d5ovo)

Check out the amazing (original) paper here: www.nature.com/articles/s43...

02.01.2025 14:11 — 👍 13    🔁 7    💬 0    📌 1