Come talk to me at EAAMO in Pittsburgh next week!

30.10.2025 17:44 — 👍 0    🔁 0    💬 0    📌 0
    
...Fairness Through Unawareness (omitting group attributes) can significantly reduce outcome inequality!
...it is often possible to reduce outcome inequality without reducing accuracy!
...Logistic Regression with group attributes is particularly prone to exacerbating inequality!

30.10.2025 17:43 — 👍 0    🔁 0    💬 1    📌 0
                        
Reconsidering Fairness Through Unawareness From the Perspective of Model Multiplicity | Proceedings of the 5th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization

New ACM EAAMO paper out, joint work with @nuriaoliver.com at @ellisalicante.org!
dl.acm.org/doi/10.1145/...
Contrary to common belief (but in line with the emerging multiplicity literature), we show theoretically and empirically that for algorithmic tasks like predicting unemployment...

30.10.2025 17:42 — 👍 2    🔁 0    💬 1    📌 0
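A minimal sketch of the kind of comparison behind the findings in this thread: fit the same model class with and without the group attribute and compare accuracy against a simple outcome-inequality measure. This is not the paper's code or data; the synthetic data-generating process, the direct dependence of the label on the group attribute, and the use of the demographic-parity gap as the inequality measure are illustrative assumptions.

# Sketch: "aware" vs. "unaware" (group attribute omitted) logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Synthetic population: binary group attribute A, one feature correlated with A,
# and a label that (by assumption) also depends directly on A.
group = rng.integers(0, 2, size=n)                    # protected attribute A
x1 = rng.normal(loc=0.5 * group, scale=1.0, size=n)   # feature correlated with A
x2 = rng.normal(size=n)                               # feature independent of A
logits = 1.2 * x1 + 0.8 * x2 + 0.8 * group - 0.7
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_aware = np.column_stack([x1, x2, group])   # model sees the group attribute
X_unaware = np.column_stack([x1, x2])        # fairness through unawareness

def evaluate(X):
    """Fit logistic regression; report accuracy and the gap in
    positive-prediction rates between groups (demographic-parity gap)."""
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0
    )
    pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
    acc = (pred == y_te).mean()
    dp_gap = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())
    return acc, dp_gap

for name, X in [("aware", X_aware), ("unaware", X_unaware)]:
    acc, dp_gap = evaluate(X)
    print(f"{name:8s} accuracy={acc:.3f}  demographic-parity gap={dp_gap:.3f}")

Whether the gap shrinks, and at what accuracy cost, depends on how strongly the features and the label depend on the group attribute; the paper studies this on real prediction tasks such as unemployment.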
                        
Theory of XAI Workshop
Explainable AI (XAI) is now deployed across a wide range of settings, including high-stakes domains in which misleading explanations can cause real harm. For example, explanations are required by law ...

Interested in provable guarantees and fundamental limitations of XAI? Join us at the "Theory of Explainable AI" workshop Dec 2 in Copenhagen! @ellis.eu @euripsconf.bsky.social
Speakers: @jessicahullman.bsky.social @doloresromerom.bsky.social @tpimentel.bsky.social
Call for Contributions: Oct 15

07.10.2025 12:53 — 👍 8    🔁 5    💬 0    📌 2
                        
Bridging Prediction and Intervention Problems in Social Systems
Many automated decision systems (ADS) are designed to solve prediction problems -- where the goal is to learn patterns from a sample of the population and apply them to individuals from the same popul...

Very impressive and comprehensive piece of work on the challenges and opportunities of using data-driven algorithms for decision-making in society. Maybe not surprising given the all-star lineup.
This will be a key point of reference for many years to come.
arxiv.org/abs/2507.05216

05.08.2025 11:18 — 👍 0    🔁 0    💬 0    📌 0
    
If you assume there is a true distribution from which we can draw i.i.d. samples, then with enough data we can approximate it, so there can only be an epsilon-RCP because we are delta-close to the assumed true distribution?

26.03.2025 17:07 — 👍 0    🔁 0    💬 1    📌 0
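One standard way to make the "with enough data" step precise, under assumptions the thread does not state (a finite model class \mathcal{H} and a loss bounded in [0, 1]): Hoeffding's inequality plus a union bound give that, with probability at least 1 - \delta over an i.i.d. sample of size n,

\[
\sup_{h \in \mathcal{H}} \bigl|\hat{R}_n(h) - R(h)\bigr| \le \varepsilon
\quad \text{whenever} \quad
n \ge \frac{1}{2\varepsilon^2} \ln\frac{2|\mathcal{H}|}{\delta},
\]

so every model's empirical risk is within \varepsilon of its true risk, and any two models that fit the sample equally well differ in true risk by at most 2\varepsilon. This bounds risk differences only; it says nothing by itself about the specific RCP notion discussed in the cited work.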
    
Is the intuition correct that, if we draw i.i.d. samples from a true distribution and two models disagree enough, then at least one of them must be far from the true model? So we can improve the models, and with more data both models approximate each other and, in the limit, the true distribution?

26.03.2025 17:04 — 👍 0    🔁 0    💬 1    📌 0
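One way to formalize this intuition, assuming model discrepancy is measured by a (pseudo)metric d, for instance the probability of disagreement under the data distribution (the thread does not fix a metric, so this is an assumption): if both models were within \varepsilon of the true model f^{*}, the triangle inequality would give

\[
d(f_1, f_2) \le d(f_1, f^{*}) + d(f^{*}, f_2) \le 2\varepsilon,
\]

so, contrapositively, d(f_1, f_2) > 2\varepsilon implies that at least one of the two models is more than \varepsilon away from f^{*}. Whether both models then converge to f^{*} as data grows is a separate question about the consistency of the learning procedure; disagreement alone does not settle it.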
                    
                                            🎓 PhD student at the Max Planck Institute for Intelligent Systems
🔬 Safe and robust AI, algorithms and society
🔗 https://andrefcruz.github.io
📍 researcher in 🇩🇪, from 🇵🇹
                                     
                            
                    
                    
                                            The mysterious complexity of our life is not to be embraced by maxims.
                                     
                            
                    
                    
                                            Research Group Lead: Law, AI and Society 
CZS Institute for AI and Law
Cluster of Excellence for Machine Learning in Science
The University of Tübingen 
In-betweening: law, technology, society, convergence, dumplings, perfumes, Finland, Tübingen, Bavaria
                                     
                            
                    
                    
                                            Professor for Machine Learning, University of Tübingen, Germany
                                     
                            
                    
                    
                                    
                            
                    
                    
                                            Machine Learning tools for neuroscience @mackelab.bsky.social.
                                     
                            
                    
                    
                                            PhD student at @unituebingen.bsky.social and IMPRS-IS in  "Lifelong Reinforcement Learning" group.
Organizer of @ewrl18.bsky.social and @twiml.bsky.social - Tübingen Women in Machine Learning
                                     
                            
                    
                    
                                            PhD Student @ Max Planck Institute for Intelligent Systems
                                     
                            
                    
                    
                                            PhD student in AI at University of Tuebingen. 
Dreaming of a better world.
https://andrehuang.github.io/
                                     
                            
                    
                    
                                            Aspiring philosopher; tolerable human; "amusing combination of sardonic detachment & literally all the feelings felt entirely unironically all at once" [he/his]
                                     
                            
                    
                    
                                            Professor in Scalable Trustworthy AI @ University of Tübingen | Advisor at Parameter Lab & ResearchTrend.AI 
https://seongjoonoh.com | https://scalabletrustworthyai.github.io/ | https://researchtrend.ai/
                                     
                            
                    
                    
                                            Poet, thinker, universal genius, natural scientist, retired privy councillor, and much more.
                                     
                            
                    
                    
                                            PhD Student in Machine Learning @unituebingen.bsky.social, @ml4science.bsky.social, @tuebingen-ai.bsky.social, IMPRS-IS; previously intern @vectorinstitute.ai; jzenn.github.io
                                     
                            
                    
                    
                                            Harvard CS PhD Candidate. Interested in algorithmic decision-making, data-centric ML, and applications to public sector operations
                                     
                            
                    
                    
                                            AI @ OpenAI, Tesla, Stanford
                                     
                            
                    
                    
                                            ELLIS PhD Student in ML at the University of Tübingen.
                                     
                            
                    
                    
                                            Professor for AI/ML Methods in Tübingen. Posts about Probabilistic Numerics, Bayesian ML, AI for Science. Computations are data, Algorithms make assumptions.
                                     
                            
                    
                    
                                            ELLIS & IMPRS-IS PhD Student at the University of Tübingen.
Excited about uncertainty quantification, weight spaces, and deep learning theory.
                                     
                            
                    
                    
                                            Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.
                                     
                            
                    
                    
                                            Mathematician at UCLA.  My primary social media account is https://mathstodon.xyz/@tao .  I also have a blog at https://terrytao.wordpress.com/ and a home page at https://www.math.ucla.edu/~tao/