I'll be attending! Excited to test the conversation-first setup
28.11.2024 03:37 — 👍 6 🔁 0 💬 0 📌 0
This is awesome! Would love to be added!
23.11.2024 16:41 — 👍 4 🔁 0 💬 1 📌 0
Stripes of various colors, corresponding to color words found in BlueSky posts.
See the colors of BlueSky, live!
www.bewitched.com/demo/rainbow...
This little visualization scans incoming posts and draws a stripe every time it finds a color word.
19.11.2024 01:22 — 👍 403 🔁 101 💬 33 📌 32
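The post above describes the visualization's core loop: scan each incoming post for color words and draw a stripe per match. A minimal sketch of that idea in Python (not the author's actual code; the color-word list and hex values here are illustrative assumptions):

```python
import re

# Hypothetical palette; the demo's real word list and colors are unknown.
COLORS = {
    "red": "#ff0000", "orange": "#ffa500", "yellow": "#ffff00",
    "green": "#008000", "blue": "#0000ff", "purple": "#800080",
    "pink": "#ffc0cb", "black": "#000000", "white": "#ffffff",
}
WORD_RE = re.compile(r"[a-z]+")

def stripes_for_post(text: str) -> list[str]:
    """Return one stripe color (hex) per color word found in a post."""
    return [COLORS[w] for w in WORD_RE.findall(text.lower()) if w in COLORS]

# Two color words in the post -> two stripes
print(stripes_for_post("The sky was blue and the leaves deep green."))
# → ['#0000ff', '#008000']
```

A real implementation would feed posts in from the Bluesky firehose and render each returned color as a thin vertical stripe on a canvas.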
my Bluesky manifestation / HCI PhD @ CMU exploring the power of everyday people to resist harmful algorithmic systems
all the other whatever at uhleeeeeeeshuh.com
~also enjoys weaving, musicals, grammar, ice cream, libraries~
she/her
PhDing at CMU HCII @hcii.cmu.edu
I research AI harms & design best practices using psych/behavioral theory.
phd student @ cmu hcii | studies racially minoritized groups & their tech experiences | tags: equitable AI, design | she/her
www.lisaegede.com
I make sure that OpenAI et al. aren't the only people who are able to study large scale AI systems.
Postdoc researcher @MicrosoftResearch, previously @TUDelft.
Interested in the intricacies of AI production and their social and political-economic impacts; the gap between policies and practices (AI fairness, explainability, transparency, assessments)
PhD student @CMU HCII | Prev: IBM Research, Microsoft Research, Brave
I develop tools that assist AI practitioners in identifying, reasoning about, and mitigating privacy risks during the development of AI products.
https://hankhplee.com/
researching AI [evaluation, governance, accountability]
Searching for principles of neural representation | Neuro + AI @ enigmaproject.ai | Stanford | sophiasanborn.com
Assistant Professor at University of Aberdeen | Postdoc at UCL | PhD at University of Sheffield | mechanistic interpretability & multimodal LLMs | https://www.ruizhe.space
Researcher @Microsoft; PhD @Harvard; Incoming Assistant Professor @MIT (Fall 2026); Human-AI Interaction, Worker-Centric AI
zbucinca.github.io
Trust & Safety Research @ Google focused on Deception & Manipulation (Scams & AI based Persuasion)
Prev: Disinfo/Extremism Research
PhD Student at @gronlp.bsky.social 🐮, core dev @inseq.org. Interpretability ∩ HCI ∩ #NLProc.
gsarti.com
Asst Prof at Université de Montréal, Associate Member of Mila-Quebec AI Institute. PhD from Cornell InfoSci. Creator of ChainForge. Programming and culture, LLM evaluation tooling.
Postdoc at UW CSE | Vis, Soni, Quantum, Design, CFDC (caffeine-free diet coke) | Don't talk about CSS with me because I can't stop.
Generative AI, Interpretability, Visualization, Tooling/Infrastructure
Machine Learning Student @ KTH
Interested in all things Bach, piano, outdoors, data, cooking.
PhD Candidate at Georgia Tech in Human-Centered Machine Learning. Working to make data science useful for actual scientists.
Visualization & data analysis research at the University of Washington. In a prior life was the Stanford Vis Group. https://idl.uw.edu
Co-director of Princeton HCI. Faculty at Princeton Computer Science. Board member at Crisis Text Line.
https://andresmh.com