🚨 Now accepting commentary proposals!! 🚨 Thrilled to share that our paper --- "Resource-rational contractualism: A triple theory of moral cognition" --- was accepted for publication at Behavioral and Brain Sciences and is open for commentary!
12.02.2025 20:20 · 👍 42 🔁 16 💬 1 📌 1
IASEAI'25 Safe & Ethical AI Conference (breakout room 1)
My talk from IASEAI on "Dynamic Preferences in AI Alignment: A Deliberative Democracy Lens" is up here
video.oecd.org/embed-or-en-...
First pass at sharing these ideas; a more fleshed-out write-up to come~
09.02.2025 14:37 · 👍 6 🔁 0 💬 0 📌 0
Tomorrow at #GROUP2025, @jina.bsky.social will present a paper on her interviews with people who voluntarily respond to misinformation online and their membership in communities dedicated to supporting this work: the Twitter/X Community Notes Discord, r/QAnonCasualties, and r/vaxxhappened.
15.01.2025 05:21 · 👍 22 🔁 6 💬 1 📌 0
📢 Seeking PhD students for AI alignment research. Our lab investigates technical mechanisms for value learning, pre-training alignment, and regulatory frameworks. Come work with us if you want to bridge technical ML and legal/policy domains. Details in thread 🧵
02.12.2024 14:39 · 👍 18 🔁 6 💬 3 📌 1
New preprint by Petter Törnberg et al. using LLMs to simulate the behavior of social media users under bridging-based ranking.
https://arxiv.org/abs/2310.05984
12.10.2023 15:03 · 👍 0 🔁 1 💬 0 📌 0
Researcher at @ox.ac.uk (@summerfieldlab.bsky.social) & @ucberkeleyofficial.bsky.social, working on AI alignment & computational cognitive science. Author of The Alignment Problem, Algorithms to Live By (w. @cocoscilab.bsky.social), & The Most Human Human.
AI technical gov & risk management research. PhD student @MIT_CSAIL, fmr. UK AISI. I'm on the CS faculty job market! https://stephencasper.com/
Professor at Penn, Amazon Scholar at AWS. Interested in machine learning, uncertainty quantification, game theory, privacy, fairness, and most of the intersections therein
CS prof at Penn, Amazon Scholar in AWS. Interested in ML theory and related topics, as well as photography and Gilbert and Sullivan. Website: www.cis.upenn.edu/~mkearns
Computer science professor at Carnegie Mellon. Researcher in machine learning. Algorithmic foundations of responsible AI (e.g., privacy, uncertainty quantification), interactive learning (e.g., RLHF).
https://zstevenwu.com/
Research Director, Founding Faculty, Canada CIFAR AI Chair @VectorInst.
Full Prof @UofT - Statistics and Computer Sci. (x-appt) danroy.org
I study assumption-free prediction and decision making under uncertainty, with inference emerging from optimality.
I study algorithms/learning/data applied to democracy/markets/society. Asst. professor at Cornell Tech. https://gargnikhil.com/. Helping build a personalized Bluesky research feed: https://bsky.app/profile/paper-feed.bsky.social/feed/preprintdigest
Professor of Computer Science, @TelAvivUni | @ACM SIGECOM Chair | Research areas: Econ&CS, Algorithmic Game Theory, Market Design
Cognitive scientist working at the intersection of moral cognition and AI safety. Currently: Google DeepMind. Soon: Assistant Prof at NYU Psychology. More at sites.google.com/site/sydneymlevine.
Princeton computer science prof. I write about the societal impact of AI, tech ethics, & social media platforms. https://www.cs.princeton.edu/~arvindn/
BOOK: AI Snake Oil. https://www.aisnakeoil.com/
VP and Distinguished Scientist at Microsoft Research NYC. AI evaluation and measurement, responsible AI, computational social science, machine learning. She/her.
One photo a day since January 2018: https://www.instagram.com/logisticaggression/
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). June 23rd to June 26th, 2025, in Athens, Greece. #FAccT2025
https://facctconference.org/
AI safety at Anthropic, on leave from a faculty job at NYU.
Views not employers'.
I think you should join Giving What We Can.
cims.nyu.edu/~sbowman
Philosopher working on normative dimensions of computing and sociotechnical AI safety.
Lab: https://mintresearch.org
Self: https://sethlazar.org
Newsletter: https://philosophyofcomputing.substack.com
We're an academic community for Plurality research & technology | Cooperate across differences | DM us your news, research, jobs, grants & fellowships.
plurality.institute
Econ prof at Harvard. (Mechanism design, market design, behavioral theory.) www.shengwu.li
Researcher of online rumors & disinformation. Former basketball player. Prof at University of Washington, HCDE. Co-founder of the UW Center for an Informed Public. Personal account: Views may not reflect those of my employer. #RageAgainstTheBullshitMachine
Psychology, neuroscience, music, game design.
We are the Social Futures Lab at UW CSE! We are reimagining social and collaborative systems to empower people and improve society.
https://social.cs.washington.edu/
Assistant Prof of AI & Decision-Making @MIT EECS
I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL.
I work on value (mis)alignment in AI systems.
https://people.csail.mit.edu/dhm/