If AI companies truly cared about “healthy engagement,” they’d design for healthy disengagement. Most aren’t, write @mluria.bsky.social and Amy Winecoff from @cdt.org.
25.11.2025 17:51 — 👍 3 🔁 3 💬 0 📌 0
@mluria.bsky.social
Human-centered researcher of emerging tech. Designer of alternative interactions. Research Fellow at @CenDemTech. Ph.D. in HCI from @CarnegieMellon
It was especially fun to use one of my favorite design research methods in this project — and for the first time in a policy context. #Speeddating allows participants to review and reflect on many possible futures scenarios to explore and articulate their full and nuanced perspectives.
20.11.2025 20:04 — 👍 3 🔁 0 💬 0 📌 0
Delighted to share my collaboration with @aliyabhatia.bsky.social on research that grounds online safety debates in what families actually say they need and value. We touched on four key topics: age verification, feed controls, screen-time features, and parental access.
20.11.2025 19:58 — 👍 2 🔁 2 💬 1 📌 0
I receive so many questions about what research in civil society is like -- join our virtual panel to hear more about exactly that! @cdt.org @datasociety.bsky.social @aclu.org
With @thakurdhanaraj.bsky.social @alicetiara.bsky.social and @mkgerchick.bsky.social
To register:
cdt.org/event/advoca...
Guests @zevesanderson.com, CDT Research Fellow @mluria.bsky.social, and guest host
@aliyabhatia.bsky.social unpack what users really think about age checks, how they shape online behavior, and what’s at stake for balancing child safety with digital rights.
🚨 NEW BLOG led by CDT Intern @adinawa.bsky.social, with CDT’s Ruchika Joshi & @mluria.bsky.social, explores dark patterns in conversational AI — subtle design tricks in tools like ChatGPT, Replika & Character.AI that influence spending, attention, & data sharing:
16.09.2025 20:39 — 👍 12 🔁 7 💬 3 📌 0
9/9 As Savage mentioned in her testimony, UX researchers generally advocate for users — especially vulnerable users like children. To get there, it is crucial to ensure that they have the freedom to ask hard questions, pursue answers with the appropriate methodology, and communicate findings clearly.
10.09.2025 14:57 — 👍 2 🔁 0 💬 0 📌 0
8/ In parallel, we need clear auditing and accountability processes within companies, as well as data access for vetted independent researchers.
10.09.2025 14:57 — 👍 2 🔁 0 💬 1 📌 0
7/ That’s why it’s not enough to have research teams — it’s also about making sure research is conducted and reported to the highest standards. In this case, the researchers themselves seem to have held the highest standards; but everyone, all the way up to top leadership, must be on board too.
10.09.2025 14:57 — 👍 3 🔁 0 💬 1 📌 0
6/ Doing research inside a big tech company is already extremely difficult. Researchers face significant pressure from internal and external stakeholders, and the potential for conflicts of interest is an everyday reality. Still, this work is deeply necessary, and understaffed.
10.09.2025 14:57 — 👍 2 🔁 0 💬 1 📌 0
5/ This testimony matters, and these whistleblowers are courageous for coming forward. At the same time, such allegations shouldn’t undermine trust in UX research, which would be a devastating outcome — company researchers are among those working hardest to surface safety risks and push for change.
10.09.2025 14:57 — 👍 1 🔁 0 💬 1 📌 0
4/ The only thing worse than having no research on a critical safety-related question is having misleading research. A gap in knowledge can be acknowledged and addressed, but if the research is riddled with malpractice, the harm is harder to detect — and far more damaging.
10.09.2025 14:57 — 👍 3 🔁 0 💬 1 📌 0
3/ If true, this profoundly undermines research integrity. Excluding findings, deleting data that sheds light on people’s safety, misrepresenting findings — these would all directly violate the most fundamental research ethics code.
10.09.2025 14:57 — 👍 3 🔁 1 💬 1 📌 02/ The testimony focused on alarming internal interactions between research teams, leadership, and legal, that allegedly suppressed, altered, and misrepresented research and research findings to protect the company from liability and damage to reputation.
10.09.2025 14:57 — 👍 3 🔁 0 💬 1 📌 0
It’s not every day that UX researcher whistleblowers testify before the Senate. Yesterday, two former Meta researchers, Jason Sattizahn and Cayce Savage, shared their concerns about safety research for Meta VR products, and about research within Meta more broadly. So how does UX research move on from here? 🧵
10.09.2025 14:57 — 👍 17 🔁 13 💬 1 📌 3
With a wave of tragic headlines about AI-related deaths and lawsuits, it's critical to reconsider the design choices that enable this -- as a first step, steer away from intentionally humanlike chatbots.
29.08.2025 15:19 — 👍 1 🔁 0 💬 0 📌 0
We used a Design Research approach (#speeddating) to test, with teens and parents, scenarios currently being proposed and debated in policy circles. Here is what we found on age verification approaches 👉
Full report on all topics, including screen-time features and algorithm controls coming soon.
CDT’s @mluria.bsky.social + @aliyabhatia.bsky.social's interviews with families reveal concerns about privacy, efficacy, & the need for transparent, user-centered approaches that support both agency & parental discretion. Read more:
28.08.2025 17:30 — 👍 1 🔁 1 💬 0 📌 1
Instead of hearing Senate Judiciary Committee members paint a sensational picture about social media, read what over 20 researchers and experts say can actually move the needle on sustainable and equitable policymaking that benefits all young people, w/ @mluria.bsky.social: cdt.org/insights/the...
19.02.2025 15:27 — 👍 2 🔁 2 💬 0 📌 0
Check out CDT’s latest report, led by @mluria.bsky.social and also starring @arianaaboulafia.bsky.social and me, on disabled workers’ experiences with gamified hiring tests, AI-scored video interviews, and other modern digitized employment assessments.
20.11.2024 23:15 — 👍 7 🔁 5 💬 1 📌 0