Does everyone need to do a PhD? No. Do we need PhDs for a prospering society? Absolutely. High-risk, high-effort, and low-monetary-reward breakthroughs require diligent workers. America’s long-term prosperity is more dependent on this expertise than most imagine.
7/ What's next? X has 60 days to fix the checkmark issue and 90 days for an action plan on ads/data or face penalty payments.
In December, Elon Musk had already called to "abolish the EU". And this decision does not even cover more recent concerns about Grok producing sexual images of minors.
5/ The Evidence: The Commission cites research showing that users can't tell the difference between paid and vetted accounts. A key paper used in this debate is "Account Verification on Social Media: User Perceptions and Paid Enrollment" (arXiv:2304.14939). doi.org/10.48550/arX...
4/ Shutting Out Researchers: The law (Art. 40) says researchers must have access to data to study systemic risks. The EC says X is obstructing this by killing free API access and charging "prohibitive" fees—blocking the public's ability to see how the platform really works.
3/ Dark Patterns in Ad Repository: Transparency isn't just about having a list; it has to be usable. The EC found X’s ad repository is "labyrinthine" and "unreliable," with artificial delays and missing info on who is actually paying for influence campaigns.
2/ "Blue Check" Deception: The EC argues that selling "verification" without actual identity checks is a deceptive design practice. By using a symbol traditionally linked to authenticity, X misleads users into trusting accounts that could be bots or scammers.
The EU’s 🇪🇺 First Major DSA Fine: Why X is in Trouble
1/ The European Commission just issued its first non-compliance fine under the Digital Services Act (DSA), hitting X with a €120 million penalty. The argument? X is failing its transparency obligations in three massive ways.
‼️ 𝐂𝐇𝐈𝐖𝐎𝐑𝐊 𝟐𝟎𝟐𝟔 Full Papers deadline coming soon!
📅 Submission Deadline: 𝐅𝐞𝐛𝐫𝐮𝐚𝐫𝐲 𝟐, 𝟐𝟎𝟐𝟔
🚨Decision notification: 𝐌𝐚𝐫𝐜𝐡 𝟑𝟎, 𝟐𝟎𝟐𝟔
Check out our 𝐂𝐚𝐥𝐥 𝐟𝐨𝐫 𝐏𝐚𝐩𝐞𝐫𝐬: chiwork.org/26/call-for-...
📖 Accepted papers will be published Open Access in 𝐀𝐂𝐌 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐋𝐢𝐛𝐫𝐚𝐫𝐲 dl.acm.org/conference/c...
In 2023 we showed in a PNAS article that AI can very effectively adapt to humans' folk theories about what AI writing looks like and be "more human than human" @mjakesch.bsky.social
arstechnica.com/ai/2026/01/n...
I am a Hybrid Chair for CHIwork 2026 (chiwork.org/26/), and we are still seeking more members for the Program Committee tinyurl.com/hm9w3xxn, as we anticipate a higher number of submissions. Thank you!
#CHIWORK2026 #HCI #FutureOfWork
It’s not just that they have malpractice to hide. Keeping data away from regulators is a way for them to keep competitors in the dark too. See my recent CSCW paper where we interviewed AV developers…
Excellent case study just out from academic colleagues @johannagunawan.bsky.social et al., on the types of HCI evidence accepted in policymaking against Dark Patterns (@deceptive.design). policyreview.info/articles/ana...
Oh, and I had specific participant-pool requirements on the Prolific side, and screening questions as well as a captcha on the Qualtrics side…
It will get scary with multimodal AI…
We had an open text field as a mandatory response field. Respondents were presented with text and image stimuli. The AI respondents clearly failed to respond to the images and instead provided long, hallucinated explanations based on the limited text stimuli available.
Overall, I reported around 15 AI participants. However, I excluded more participants based on our relatively stringent "expert" exclusion criteria. I'd need to look into the exact numbers; we re-recruited twice. We initially targeted 150 participants, and 141 were included in the final set.
For a study Helen and I ran, I manually reviewed all 150 participants after finding AI responses. Prolific granted me 30 extra participants after I reported the AI responses...
Why don’t autonomous vehicle companies share crash data — and how can they start? @sandhaus.bsky.social
Cornell Tech researchers are tackling one of the biggest challenges in autonomous vehicle safety: the lack of shared crash and safety data. Learn more: https://bit.ly/3WUUXnu
I also have a short blog post here that narrates the story. medium.com/acm-cscw/the...
It is available open access here: dl.acm.org/doi/10.1145/..., and you can find my slides here: www.figma.com/deck/5rXbpHv...
Welcoming all new @acm-cscw.bsky.social followers 🥰!
I presented my paper "My Precious Crash Data: Barriers and Opportunities in Encouraging Autonomous Driving Companies to Share Safety-Critical Data", co-authored with @angelhwang.bsky.social, @fabulousqian.bsky.social, and @wendyju.bsky.social; it's already online!
Interestingly, The Sims FreePlay is indeed banned in Saudi Arabia, likely for allowing same-sex interactions: www.newsweek.com/sims-freepla...
Join us for NYC Privacy Day 2025 at Cornell Tech, hosted by DLI @nissenbaum.bsky.social and SETS @mantzarlis.com.
We have a great selection of speakers, and alongside the talks, we'll feature student posters + demos.
🔗 Details, registration, and poster submission: dli.tech.cornell.edu/nyc-privacy-...
Due to travel restrictions, I cannot attend DIS in Madeira, Portugal. 🇵🇹🏝️
I recorded my presentation on how technology design students use GenAI in class projects: it accelerates design iteration but produces negative sentiment about learning and reflection skills.
supercut.ai/share/cornel...
Thanks for citing our work, and leaning into the implications we unpack in the discussion! arxiv.org/abs/2505.07085
Interesting, I didn’t know this. But apparently, even though Toto licensed it from him in the '60s, major iterations were needed, and only in the '80s did that design really take off. I think they could’ve mentioned more details like that. www.indooroutdoorguy.ca/bidets-bidet...
Washing + drying = the Japanese-style bidet, a.k.a. the Washlet (en.wikipedia.org/wiki/Washlet), which is Japanese.
Some students hope to keep working on their labeler and build it out beyond the current prototype stage. This includes the group behind @bsky-sci-verify.bsky.social, an effort to let scientists self-ID through their ORCID accounts and provide context on their scholarly work. Check it out!
"She was not ready to rule on the argument that the chatbot’s messages are protected speech “at this stage,” adding that the defendants had failed to articulate “why words strung together” by an AI system should be considered speech."
www.washingtonpost.com/nation/2025/...
Came across this arxiv.org/abs/2505.12540 @cornelltech.bsky.social paper from @rishi-jha.bsky.social et al. on TikTok (vm.tiktok.com/ZNdh5H7Ro/), showing that it's possible to "translate unknown embeddings into a different space while preserving their geometry".