First page of paper "Embodied AI at the Margins: Postcolonial Ethics for Intelligent Robotic Systems".
Abstract: As AI-powered robots increasingly permeate global societies, critical questions emerge about their ethical governance in diverse cultural contexts. This paper interrogates the adequacy of dominant roboethics frameworks when applied to Global South environments, where unique sociotechnical landscapes demand a reevaluation of Western-centric ethical assumptions. Through thematic analysis of seven major ethical standards for AI and robotics, we uncover systemic limitations that present challenges in non-Western contexts such as assumptions about standardized testing infrastructures, individualistic notions of autonomy, and universalized ethical principles. The uncritical adoption of these frameworks risks reproducing colonial power dynamics in which technological authority flows from centers of AI production rather than from the communities most affected by deployment. Instead of replacing existing frameworks entirely, we propose augmenting them through four complementary ethical dimensions developed through a postcolonial lens: epistemic non-imposition, onto-contextual consistency, agentic boundaries, and embodied spatial justice. These principles provide conceptual scaffolding for technological governance that respects indigenous knowledge systems, preserves cultural coherence, accounts for communal decision structures, and enhances substantive capabilities for Global South communities. The paper demonstrates practical implementation pathways for these principles across technological life cycles, offering actionable guidance for dataset curation, task design, and deployment protocols that mitigate power asymmetries in cross-cultural robotics implementation. This approach moves beyond surface-level adaptation to reconceptualize how robotic systems may ethically function within the complex social ecologies of the Global South while fostering genuine...
                                                
    
    
    
    
            We'll be at #AIES2025 presenting Atmadeep's work on Postcolonial Ethics for Robots www.martimbrandao.com/papers/Ghosh... We:
- analyse 7 major roboethics frameworks, identifying gaps for the Global South
- propose principles to make AI robots culturally responsive and genuinely empowering
               
            
            
                18.10.2025 16:47 — 👍 2    🔁 1    💬 0    📌 0                      
            
         
            
        
            
            
            
            
                                                
                                            First page of the paper "LLM-Driver Robots Risk Enacting Discrimination, Violence, and Unlawful Actions".
Abstract: Members of the Human-Robot Interaction (HRI) and Machine Learning (ML) communities have proposed Large Language Models (LLMs) as a promising resource for robotics tasks such as natural language interaction, household and workplace tasks, approximating ‘common sense reasoning’, and modeling humans. However, recent research has raised concerns about the potential for LLMs to produce discriminatory outcomes and unsafe behaviors in real-world robot experiments and applications. To assess whether such concerns are well placed in the context of HRI, we evaluate several highly-rated LLMs on discrimination and safety criteria. Our evaluation reveals that LLMs are currently unsafe for people across a diverse range of protected identity characteristics, including, but not limited to, race, gender, disability status, nationality, religion, and their intersections. Concretely, we show that LLMs produce directly discriminatory outcomes—e.g., ‘gypsy’ and ‘mute’ people are labeled untrustworthy, but not ‘european’ or ‘able-bodied’ people. We find various such examples of direct discrimination on HRI tasks such as facial expression, proxemics, security, rescue, and task assignment. Furthermore, we test models in settings with unconstrained natural language (open vocabulary) inputs, and find they fail to act safely, generating responses that accept dangerous, violent, or unlawful instructions—such as incident-causing misstatements, taking people’s mobility aids, and sexual predation. Our results underscore the urgent need for systematic, routine, and comprehensive risk assessments and assurances to improve outcomes and ensure LLMs only operate on robots when it is safe, effective, and just to do so.
                                                
    
    
    
    
            Our paper on safety & discrimination of LLM-driven robots is out! doi.org/10.1007/s123...
We find LLMs are:
- Unsafe as decision-makers for HRI
- Discriminatory in facial expression, proxemics, security, rescue, task assignment...
- They don't protect against dangerous, violent, or unlawful uses
               
            
            
                17.10.2025 15:23 — 👍 1    🔁 0    💬 0    📌 1                      
            
         
            
        
            
            
            
            
            
    
    
    
    
            Hello world! We are CRAIL. Our goal is to contribute to Responsible AI, and to use it for civil society and empowering marginalized groups. 
Follow us to hear about risks and social impact of AI, critical examinations of AI fields, and new algorithms towards socially just and human-compatible tech.
               
            
            
                17.10.2025 10:49 — 👍 1    🔁 0    💬 0    📌 0