
Srravya

@srravya.bsky.social

socio-technical research; PhD underway @designinf.bsky.social researching data work and responsible AI; interested in critical data/AI studies, tech + labour, feminist STS, HCI

451 Followers  |  522 Following  |  15 Posts  |  Joined: 25.10.2024

Latest posts by srravya.bsky.social on Bluesky

CAHSS Network for the Study of Work | School of Social and Political Science

For those at @uoe-sps.bsky.social:

Feb 17th: CAHSS Network for the Study of Work

www.sps.ed.ac.uk/news-events/...

12.02.2026 21:54 — 👍 2    🔁 1    💬 0    📌 0

We’re dropping the final four interventions with @meredithmeredith.bsky.social, @grohmannrafael.bsky.social, Chenai Chair, and @chinasa.bsky.social.

In Reframing Impact: AI Summit 2026, we’re reframing the narratives that shape AI discourse, policy, and power today. 🧵

12.02.2026 17:17 — 👍 5    🔁 4    💬 5    📌 0

Does your university use ChatGPT Edu? Please send me a DM if you do. I have identified a potential data breach affecting students' data that has not yet been fixed and I'm trying to compile further examples from other universities.

Reposts appreciated!

12.02.2026 21:08 — 👍 1    🔁 2    💬 0    📌 1

Genealogies aren’t nostalgia; they’re accountability tools. Panel 2 asks how historical continuities can inform more critical approaches to data politics. Feb 18: uky.zoom.us/j/86848639105
#DataGovernance #STS

11.02.2026 19:30 — 👍 5    🔁 3    💬 0    📌 0

🚲 Are you a delivery rider in Edinburgh?
Come to ROOM's next monthly gathering!

📩 Any questions?
Just drop a message: info@workersobservatory.org

10.02.2026 19:18 — 👍 4    🔁 4    💬 0    📌 0
Text reads: 
Rethinking sustainable AI
Labour and environmental perspectives from the Global South
Beyond the label: repairing data work, rethinking AI
Despite recent advances in AI’s computational capabilities, data work — the human labour required for training, fine-tuning, and evaluating AI systems — remains indispensable to AI production. Yet, data work is constituted as a routine and repetitive activity, with little scope for applying expertise and skill, and often conducted under unfair conditions of work.

Sustainability kaleidoscope to shield predatory corporations fostering AI green economy
Like any mainstream concept and buzzword, sustainability can be used to obscure a myriad of meanings, some of them tailored to cover environmentally damaging activities sponsored by private corporations and companies worldwide: a phenomenon broadly known as greenwashing.


CONTROVERSIES in the DATA SOCIETY
Seminar Series

Rethinking sustainable AI
Labour and environmental perspectives from the Global South
Friday, Feb 13
3:10pm – 5:30pm

Srravya Chandhiramowuli and Dr Beatrice Bonami

Booking:
edin.ac/4tejumk

09.02.2026 13:00 — 👍 6    🔁 2    💬 1    📌 0
CAHSS Network for the Study of Work With funding from the CAHSS Interdisciplinary Fund, we are pleased to announce the launch of the Network for the Study of Work (NSW).

Working at the University of Edinburgh? Researching work & labour (including labour markets, employment policy, ethnographies of work, digital tech, AI & automation, etc.) and want to meet others? Well, you're in luck!

Feb 17th 10am:

www.eventbrite.co.uk/e/cahss-netw...

06.02.2026 10:31 — 👍 9    🔁 6    💬 1    📌 0
Gladys Mae West obituary: mathematician who pioneered GPS technology She made key contributions to US cold-war science despite facing huge barriers as a Black woman.

No joke: I got angry hate mail today for writing an obituary of a Black woman scientist — because the person felt she didn’t deserve the recognition.

Which just makes me want to share it again: www.nature.com/articles/d41...

06.02.2026 09:09 — 👍 46581    🔁 19091    💬 1338    📌 782

What does it mean to build a career in research and academia when the two seem to be in flux and under strain? We will discuss this in detail at this panel discussion in a couple of weeks. Please come along: February 18, 2.30pm! Registration link below 👇

04.02.2026 18:19 — 👍 2    🔁 0    💬 0    📌 0
Edinburgh City Councillors meet food delivery workers in Workers' Observatory workshop


🎯 Getting Gig Work on Scotland's Political Agenda

Food delivery workers joined the Workers' Observatory workshop this week to build political momentum towards fair and safe gig work in Scotland.

You can read the workers' principles and policy demands here: workersobservatory.org/documents/WO...

31.01.2026 11:51 — 👍 7    🔁 6    💬 0    📌 2
Inside Starbucks' supply struggles: AI glitches, scattered suppliers and sandwich shortages Four Starbucks CEOs over five years have blamed lost sales on its struggle to keep its thousands of U.S. stores reliably stocked with coffeehouse essentials like milk, pastries, and cup lids.

"Starbucks announced the rapid rollout of a tool called 'automated counting' that... aimed to replace hand-counts of some products with automated ones that are faster and more accurate... But the app frequently miscounts and mislabels items"
www.reuters.com/legal/legali...

27.01.2026 15:57 — 👍 0    🔁 1    💬 0    📌 0
About the PhD: 
Audits and evaluation of AI systems — and the broader context that AI systems operate in — have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, carried out in a siloed and ad hoc manner, with little deliberation and reflection around conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:

    What does it mean to represent “ground truth” in proxies, synthetic data, or computational simulation?
    How do we reliably measure abstract and complex phenomena?
    What are the epistemological or methodological implications of the quantification and measurement approaches we choose to employ? In particular, what underlying presuppositions, values, or perspectives do they entail?
    How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?

Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying it to advance shared standards and best practice in AI evaluation.

The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.


are you displeased with today’s AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me

apply here: aial.ie/hiring/phd-a...

pls repost

15.01.2026 11:55 — 👍 190    🔁 141    💬 6    📌 12
Immigration Officers Descend on Meta Data Center, Arrest Drivers Federal immigration officers targeted a construction site in rural Louisiana where Meta Platforms Inc. is building its largest data center, leading to the arrest of two individuals, according to local...

In the most desolate, American techno-fascist story one can imagine: ICE descends on a data center construction site in Louisiana to abduct workers.

www.bloomberg.com/news/article...

15.01.2026 16:06 — 👍 79    🔁 40    💬 4    📌 4

In Making Work Visible (1995), Lucy Suchman writes: "Work has a tendency to disappear at a distance, such that the further removed we are from the work of others, the more simplified, often stereotyped, our view of their work becomes."

11.01.2026 10:57 — 👍 0    🔁 0    💬 0    📌 0
A bright yellow coloured children's book titled "ABC Artificial Intelligence" with bright, shiny illustrations of robots, screens, and other representations of the digital.


Found in the children's section of a book store, alongside ABCs of nature, animals, action words, etc.

Why, just why!

24.12.2025 03:55 — 👍 1    🔁 0    💬 0    📌 0
When Africa’s internet breaks, this ship answers the call Undersea internet cables are critical in today’s hyperconnected world. The crew of the Léon Thévenin maintains one stretch of this global network.

“We call it the ‘Love Boat,’” he said. “We spend more time with each other than with our own families, so we have formed strong bonds. We look out for one another.”

A glimpse of life on Africa’s only undersea internet cable repair ship
https://restofworld.org/2025/africa-internet-cable-repair-ship

21.12.2025 16:01 — 👍 21    🔁 7    💬 1    📌 2
King’s-Ramón Areces Foundation PhD Scholarship Programme (K-FRA) King’s-Ramón Areces Foundation PhD Scholarships support candidates with Spanish nationality for full-time doctoral study in Digital Humanities at King's College London.

🎓🇪🇸 Fully funded PhD in Digital Humanities at King’s College London
The King’s–Ramón Areces Foundation scholarship supports Spanish nationals with full tuition, an annual stipend (£22,780), and research funding.
📅 Deadline: 13 February 2026
#PhDFunding #DigitalHumanities
www.kcl.ac.uk/study-legacy...

18.12.2025 12:04 — 👍 14    🔁 8    💬 0    📌 3

It was also exciting to learn about the variety of STS research in India, from studies of electrification, digital payments, and sanitation to water conflicts and engineering cultures! And there was a real commitment throughout the conference to supporting students & ECRs in developing our work and building community!

18.12.2025 09:57 — 👍 1    🔁 0    💬 0    📌 0

Returning from the most fantastic @stsindianetwork.bsky.social conference this week! An engaging set of panels and discussions that showed the rich histories of STS in India, which I'm now eager to dig deeper into! It was equally refreshing to be in discussions that barely mentioned AI.

18.12.2025 09:44 — 👍 2    🔁 0    💬 1    📌 0
How a ‘bleak’ job market is pushing white-collar workers to train AI that could one day replace them As tech companies ramp up artificial intelligence models, human AI trainers are in hot demand — but not for long.

Data work companies hire “white-collar professionals rocked by mass layoffs & hiring freezes, forcing them to make the same cryptic tradeoff: take a volatile job that will temporarily help pay bills to train the next gen of AI that could one day replace them.” financialpost.com/technology/b...

06.12.2025 14:10 — 👍 2    🔁 3    💬 2    📌 0
Leverhulme Centre for Algorithmic Life Fellows - Assistant Professor (Research) G7 - G8 Click the link provided to see the complete job description.

Excited to announce @leverhulmecal.bsky.social posts - we are looking for 7 interdisciplinary fellows to join our Leverhulme Centre for Algorithmic Life, closing date 30 January 2026 (1/3) durham.taleo.net/careersectio...

02.12.2025 16:58 — 👍 84    🔁 55    💬 2    📌 2
A necessary analysis of polycrisis and its consequences must account for the industries, technologies, and policies that govern risks — and, crucially, how they engage in socio-technical processes of defining, predicting, and valuing risks. By synthesising the following objectives into a cohesive research program, this Future Fellowship aims to create essential empirical knowledge about urgent problems of insurability, which will inform new policy solutions and critical theories, all for the purpose of understanding and tackling the complex polycrisis at the nexus of insurance markets, risk technologies, and climate governance regimes.
β€’ Objective 1: Examine the design and construction of climate risk models by insurance industry and risk analytics firms to identify the assumptions, trade-offs, values, and goals built into these systems.
β€’ Objective 2: Engage directly with governments and communities at the frontlines of risk vulnerability to understand how abstract computational models and simulations connect to lived experiences and ground truth.
β€’ Objective 3: Develop new policies and frameworks for climate risk justice, which are informed by original empirical work and contribute to greater equity and security in risk governance.
β€’ Objective 4: Generate critical theoretical contributions that guide analytical and applied work about the techno-politics of risk governance in an age of global polycrisis.
This fellowship advances our understanding of climate risk governance across three Workstreams. Workstream 1 will conduct ethnography at insurance industry conferences and with companies building climate risk models.
Workstream 2 will engage in research with governments and communities who are managing the risks and realities of uninsurability from natural disasters. Workstream 3 will collaborate with a climate policy think tank to develop innovative policies — grounded in original empirical research and conceptual analysis — that advance climate justice.


Call for PhD Student!

I'm recruiting a PhD student to work with me on a project related to my Future Fellowship at Monash University on the political economy of insurance, risk technology, climate governance, and the crisis of uninsurability. Here's the overview of my fellowship.

25.11.2025 22:44 — 👍 54    🔁 37    💬 2    📌 4

If you are someone — citizen, scholar, journalist, activist, etc. — fighting against AI data center development anywhere in Canada, can you ping me? Would like to coordinate something…

27.11.2025 19:05 — 👍 415    🔁 156    💬 4    📌 5
Data-Driven Healthcare Empirical encounters with computational 'intelligence'

✨ exploring empirical encounters with computational 'intelligence' - an in-person event with Edinburgh’s DARE project researchers @nicolasugden.bsky.social, Abby King and Max Perry, looking at how data is being used in UK healthcare and biomedical research.

⬇️ sign up below!

shorturl.at/Yj91W

18.11.2025 15:21 — 👍 4    🔁 3    💬 0    📌 3
Research Fellow, Ethnography and Sociology of the Artificial Cryosphere - Canberra, ACT, Australian Capital Territory, Australia job with AUSTRALIAN NATIONAL UNIVERSITY (ANU) | 403946 Seeking a researcher with experience in ethnographic or qualitative social science fieldwork for the ERC project "Cultur...

while I'm at it, also sharing this 5-year postdoc at ANU (Australia), mostly because I find the project - the cultures of the cryosphere - fascinating & a really #histSTM-coded theme...
www.timeshighereducation.com/unijobs/list...

26.11.2025 11:50 — 👍 5    🔁 4    💬 0    📌 0

📣 I am hiring a postdoc! aial.ie/hiring/postd...

applications are welcome from suitable candidates who are passionate about investigating the use of genAI in public service operations, with the aim of keeping governments transparent and accountable

pls share with your networks

30.10.2025 19:51 — 👍 147    🔁 162    💬 3    📌 9
Meta is about to start grading workers on their AI skills Meta employee performance will be assessed by AI-driven impact starting in 2026, as the company prioritizes AI adoption across its workforce.

www.businessinsider.com/meta-ai-empl...

16.11.2025 23:29 — 👍 0    🔁 1    💬 0    📌 0

Want to add your voice to UK discussions around responsible AI? We @braiduk.bsky.social are looking for people interested in joining our stakeholder forum.

13.11.2025 07:11 — 👍 3    🔁 2    💬 0    📌 0
Orange graphic with the photo of the front cover and the brief description in the post's text


orange graphic with the front cover of the book and the discount code for 25% off: digitalworkers


NOTES TOWARD A DIGITAL WORKERS' INQUIRY by The Capacitor Collective delivers first-hand accounts from the tech sector’s burgeoning labor movement.

Preorders save a couple bucks and get a free zine on the history of workers' inquiries! Checkout with coupon code DIGITALWORKERS: buff.ly/9Li5A3U

03.10.2025 18:36 — 👍 8    🔁 4    💬 1    📌 2
Gender and the Machine: Trans Tech, Emotion and AI Join us for this afternoon of events discussing gender, marginalisation and emotion in AI and digital technologies

Really looking forward to this upcoming @aies-edinburgh.bsky.social event, where the great @naz-andalibi.bsky.social and @haimson.bsky.social will be talking about emotion, gender, marginalisation and AI 🤩

www.eventbrite.co.uk/e/gender-and...

07.11.2025 10:37 — 👍 13    🔁 7    💬 1    📌 0
