
Ezgi Korkmaz

@ezgikorkmaz.bsky.social

Machine Learning Researcher. PhD in Machine Learning. ✨Researching Reinforcement Learning. Been at @UCL @GoogleDeepmind @UCBerkeley @ucberkeleyofficial.bsky.social @ucl.ac.uk Website: https://ezgikorkmaz.github.io/

281 Followers  |  0 Following  |  11 Posts  |  Joined: 21.06.2024

Latest posts by ezgikorkmaz.bsky.social on Bluesky

It is quite difficult to navigate through unresponsiveness in the discussion. Do you think the current batch of assignments, i.e. approximately 12 papers per AC, was a good step, or do you think it should be around 40 or 50 to ensure the comparisons are less noisy across ACs?

22.05.2025 14:26 — 👍 0    🔁 0    💬 1    📌 0

I opened this poll and I will share the results here as well:
#ICML #ICML2025

06.04.2025 18:01 — 👍 4    🔁 0    💬 1    📌 1

If you are interested in large language models, see my paper below on how we can uncover the biases learned by these models.

Link: neurips2023-enlsp.github.io/papers/paper...

#ReinforcementLearning #FoundationModels #DeepRL #DeepReinforcementLearning #ResponsibleAI #AIBias #LLMs #LanguageModels

11.02.2025 17:56 — 👍 3    🔁 0    💬 0    📌 0

Gave a lecture on Hoeffding's Inequality recently at #LMUMunich! Nice to have a chance to talk about foundations. Find my slides below:

Link: ezgikorkmaz.github.io

#Statistics #Probability #LMU #MachineLearning

02.02.2025 12:04 — 👍 3    🔁 0    💬 0    📌 0
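Background for this post: Hoeffding's inequality states that for n i.i.d. samples bounded in [0, 1], the probability that the sample mean exceeds its expectation by t is at most exp(-2nt^2). A minimal Python sketch that checks the bound against a Monte Carlo estimate (the function names and parameter values are illustrative, not taken from the lecture slides):

```python
import math
import random

def hoeffding_bound(n, t):
    """Hoeffding upper bound on P(sample mean - E[mean] >= t)
    for n i.i.d. samples taking values in [0, 1]."""
    return math.exp(-2 * n * t ** 2)

def empirical_tail(n, t, p=0.5, trials=5000, seed=0):
    """Monte Carlo estimate of P(sample mean - p >= t)
    for n i.i.d. Bernoulli(p) samples."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if mean - p >= t:
            hits += 1
    return hits / trials

# For n = 100, t = 0.1 the bound is exp(-2) ~ 0.135,
# while the true Bernoulli(0.5) tail is far smaller.
n, t = 100, 0.1
print(empirical_tail(n, t), hoeffding_bound(n, t))
```

The bound is distribution-free: it depends only on the range of the samples, which is why the Monte Carlo tail sits well below it here.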
Preview: GitHub - EzgiKorkmaz/adversarial-reinforcement-learning: Reading list for adversarial perspective and robustness in deep reinforcement learning.

If you are interested in deep reinforcement learning, here is my reading-list repo:

Link: github.com/EzgiKorkmaz/...

#ReinforcementLearning #SafeAI #Adversarial #Robust #DeepRL #robustRL #LanguageModels #AdversarialRL #AISafety #ExplainableAI #TrustworthyAI #ResponsibleAI #DeepReinforcementLearning

22.01.2025 15:59 — 👍 5    🔁 0    💬 0    📌 0

A recent paper of mine presents a foundational analysis of decision making in deep reinforcement learning and the representations it learns.

Link: proceedings.mlr.press/v235/korkmaz...

#ReinforcementLearning #ICLR2025 #ACL2025 #NAACL2025 #NeurIPS2024 #ICML2025 #DeepRL #DeepReinforcementLearning

14.01.2025 14:15 — 👍 14    🔁 2    💬 0    📌 0

I recently wrote a survey on deep reinforcement learning. The paper is a compact guide to some of the key concepts in reinforcement learning.

Link: arxiv.org/pdf/2401.023...

#ReinforcementLearning #ICLR2025 #ACL2025 #NAACL2025 #NeurIPS2024 #ICML2025 #DeepRL #DeepReinforcementLearning

12.01.2025 16:21 — 👍 40    🔁 9    💬 1    📌 0


This paper provides compact highlights of my recent work on generalization, the adversarial perspective, robustness, and safety in deep reinforcement learning! #NeurIPS2024
@neuripsconf.bsky.social

#ReinforcementLearning #SafeAI #AISafety #TrustworthyAI #ML #DeepRL

bsky.app/profile/ezgi...

13.12.2024 18:35 — 👍 3    🔁 0    💬 0    📌 0

The recent paper I wrote on deep reinforcement learning and generalization will appear at #NeurIPS2024 @neuripsconf.bsky.social!

#ReinforcementLearning #DeepReinforcementLearning

10.12.2024 18:35 — 👍 1    🔁 0    💬 0    📌 0

If you are curious about deep reinforcement learning, find the compact highlights of my recent papers in this new short piece:

#NeurIPS2024 @neuripsconf.bsky.social #NeurIPS24
#reinforcementlearning #AIsafety #AISecurity #ResponsibleAI #TrustworthyAI #RobustAI #DeepRL

bsky.app/profile/ezgi...

07.12.2024 12:18 — 👍 2    🔁 1    💬 0    📌 0

The paper on adversarial non-robustness is now online! This paper highlights what you should know about Robust Reinforcement Learning.

Adversarial Robust Deep Reinforcement Learning is Neither Robust Nor Safe
Link: openreview.net/pdf?id=EPa0u...

#NeurIPS2024 @neuripsconf.bsky.social #NeurIPS24

05.12.2024 22:46 — 👍 3    🔁 2    💬 1    📌 2