
Miranda Bogen

@mbogen.bsky.social

Director of the AI Governance Lab @cendemtech.bsky.social / responsible AI + policy

315 Followers  |  187 Following  |  1.405 Posts  |  Joined: 18.11.2024

Latest posts by mbogen.bsky.social on Bluesky


AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts.

Delighted to feature @mbogen.bsky.social on Rising Tide today, on what's being built and why we should care:

22.07.2025 00:49 — 👍 11    🔁 4    💬 1    📌 0
Personalized AI is rerunning the worst part of social media's playbook: the incentives, risks, and complications of AI that knows you

AI companies are starting to promise personalized assistants that "know you." We've seen this playbook before — it didn't end well.

In a guest post for @hlntnr.bsky.social's Rising Tide, I explore how leading AI labs are rushing toward personalization without learning from social media's mistakes.

21.07.2025 18:32 — 👍 14    🔁 5    💬 0    📌 3
It's (Getting) Personal: How Advanced AI Systems Are Personalized. This brief was co-authored by Princess Sampson. Generative artificial intelligence has reshaped the landscape of consumer technology and injected new dimensions into familiar technical tools. Search e...

Personalization is political. Very excited to share a piece I co-authored with @mbogen.bsky.social as a Google Public Policy Fellow @cendemtech.bsky.social!

cdt.org/insights/its...

05.05.2025 16:51 — 👍 15    🔁 4    💬 1    📌 1
OpenAI slashes AI model safety testing time. Testers have raised concerns that its technology is being rushed out without sufficient safeguards.

From CDT's @mbogen.bsky.social: "As #AI companies are racing to put out increasingly advanced systems, they also seem to be cutting more and more corners on safety, which doesn't add up." www.ft.com/content/8...

11.04.2025 18:29 — 👍 22    🔁 12    💬 1    📌 0
Adopting More Holistic Approaches to Assess the Impacts of AI Systems, by Evani Radiya-Dixit, CDT Summer Fellow. As artificial intelligence (AI) continues to advance and gain widespread adoption, the topic of how to hold developers and deployers accountable for the AI systems they implement remains pivotal. Assessments of the risks and impacts of AI systems tend to evaluate a system's outcomes or performance through methods like […]

To truly understand AI's risks & impacts, we need sociotechnical frameworks that connect the technical with the societal. Holistic assessments can guide responsible AI deployment & safeguard safety and rights.

📖 Read more: cdt.org/insights/ado...

16.01.2025 17:47 — 👍 6    🔁 2    💬 0    📌 0
Hypothesis Testing for AI Audits. AI systems are used in a range of settings, from low-stakes scenarios like recommending movies based on a user's viewing history to high-stakes areas such as employment, healthcare, finance, and autonomous vehicles. These systems can offer a variety of benefits, but they do not always behave as intended. For instance, ChatGPT has demonstrated bias […]

CDT's Amy Winecoff and @mbogen.bsky.social's new explainer dives into the fundamentals of hypothesis testing, how auditors can apply it to AI systems, & where it might fall short. Using simulations, we show its role in detecting bias in a hypothetical hiring algorithm. cdt.org/insights/hyp...

16.01.2025 19:23 — 👍 9    🔁 3    💬 1    📌 0
Graphic for CDT AI Gov Lab's report, "Assessing AI: Surveying the Spectrum of Approaches to Understanding and Auditing AI Systems." Illustration of a collection of AI "tools" and "toolbox" – a hammer and red toolbox – and a stack of checklists with a pencil.


NEW REPORT: CDT AI Governance Lab's Assessing AI report looks at the rise of complex automated systems, which demand a robust ecosystem for managing risks and ensuring accountability. cdt.org/insights/ass... cc: @mbogen.bsky.social

16.01.2025 17:37 — 👍 9    🔁 3    💬 1    📌 0
Upturn Seeks a Research Associate This position is ideal for someone who is excited about sharp, interdisciplinary research on a range of topics related to technology, policy, and justice.

@upturn.org is hiring for a research associate! Excellent opportunity to work with some fantastic folks! www.upturn.org/join/researc...

17.12.2024 13:13 — 👍 9    🔁 5    💬 1    📌 0

howdy!

the Georgetown Law Journal has published "Less Discriminatory Algorithms." it's been very fun to work on this w/ Emily Black, Pauline Kim, Solon Barocas, and Ming Hsu.

i hope you give it a read — the article is just the beginning of this line of work.

www.law.georgetown.edu/georgetown-l...

18.11.2024 16:40 — 👍 50    🔁 15    💬 4    📌 4
