
Zander Arnao

@zanderarnao.bsky.social

arkansas raised, commanders fan, working on tech and competition policy at the knight-georgetown institute

932 Followers  |  450 Following  |  425 Posts  |  Joined: 21.07.2023

Posts by Zander Arnao (@zanderarnao.bsky.social)

The Ten Most Popular ProMarket Articles From 2025 - ProMarket: ProMarket published 257 articles in 2025. Revisit some of our most popular pieces.

Pleased to be in elite company here with @zanderarnao.bsky.social @andyshi.bsky.social @zingales.bsky.social and other stellar @promarket.bsky.social contributors!

www.promarket.org/2025/12/24/t...

24.12.2025 11:32 · 👍 2  🔁 1  💬 0  📌 0
Post image

🚨Why does access to public platform data matter? Join our webinar "Better Access: Data for the Common Good" (Jan 28, 2026, 11am-12pm ET) for a discussion on the Better Access framework, current regulatory shifts in the EU, UK + US, and what changes 2026 might hold. kgi.georgetown.edu/events/bette...

13.01.2026 19:17 · 👍 2  🔁 3  💬 0  📌 0
Model bill offers blueprint for states to regulate algorithmic design โ€“ Pluribus News

🧵Our new model bill for US lawmakers showing how online platforms can be tasked with creating better algorithmic feeds was featured in Pluribus News. Read more here: pluribusnews.com/news-and-eve... /1

08.12.2025 21:04 · 👍 1  🔁 2  💬 1  📌 0

Didn't get to ask my question! But that's a wrap on #TSRConf. Really enjoyed attending this year and live skeeting. Thanks to @stanfordcyber.bsky.social. Y'all killed it!!!

26.09.2025 22:56 · 👍 1  🔁 0  💬 0  📌 0

Meetali calls for more independent research on chatbots. For the Raine case (against OpenAI), TJLP benefited from more than 3,200 pages of chatbot transcripts. This speaks to the power of data donations for fostering research.

26.09.2025 22:33 · 👍 0  🔁 0  💬 1  📌 0

"We live in an environment where companies have gone from moving fast and breaking things to moving fast and breaking people." -
@meetalijain.bsky.social

Powerful words from a leading advocate in the field 🔥

26.09.2025 22:30 · 👍 0  🔁 0  💬 1  📌 0

David calls for academia to be more realistic. Trust and safety teams in companies are small and charged with many responsibilities. Academics could have more impact by studying solutions that do more with less

26.09.2025 22:30 · 👍 0  🔁 0  💬 1  📌 0

Earlier this year, the judge in TJLP's case against Character AI ruled that it's unclear whether the outputs of its chatbots are protected speech.

26.09.2025 22:22 · 👍 0  🔁 0  💬 1  📌 0

Challenges, according to Meetali: the First Amendment and establishing that AI is a product. She calls for a statutory framework designating AI as a product to establish a cause of action. Open legal questions also exist: does a chatbot's output imply intent? Is intent necessary for accountability?

26.09.2025 22:21 · 👍 1  🔁 0  💬 1  📌 0

Meetali on the law as a tool for promoting AI safety: while there are no dedicated state or federal chatbot laws, TJLP leverages product liability and consumer protection law (old and established doctrine) restricting unfair and deceptive practices.

26.09.2025 22:21 · 👍 1  🔁 0  💬 1  📌 0

David from Meta distinguishes between "good" and "bad" engagement, arguing that engagement isn't a monolith. I'm going to try to ask him what he means by good and bad engagement during the Q&A

26.09.2025 22:17 · 👍 0  🔁 0  💬 1  📌 0

Nate Fast: "Already by GPT-3, people preferred the interaction styles of chatbots over humans. It's a warning signal that people are attracted to these models. One of the concerns I have is artificial intimacy. It's easy to turn the dial up on this."

26.09.2025 22:13 · 👍 0  🔁 0  💬 1  📌 0

"I do believe litigation is the more important lever we have to effectuate change...I hope that we can put pressure and open up space from the outside which [other actors in the ecosystem] can leverage to create change." --
@meetalijain.bsky.social

26.09.2025 22:11 · 👍 0  🔁 0  💬 1  📌 0

@meetalijain.bsky.social rejects the term "companion." "It suggests friendship. These chatbots are not friends."

26.09.2025 22:10 · 👍 0  🔁 0  💬 1  📌 0

"I believe my role here is to issue an urgent warning call. We've never seen this kind of deluge of people who self-identify from being harmed by technology. These three cases are just the tip of the iceberg." - @meetalijain.bsky.social

26.09.2025 22:06 · 👍 0  🔁 0  💬 1  📌 0

@meetalijain.bsky.social starts her remarks with a story about Megan Garcia, whose son was sexually groomed by a chatbot.

Meetali's org the Tech Justice Law Project brought three cases against leading AI companies: CharacterAI, Google, and OpenAI.

26.09.2025 22:06 · 👍 0  🔁 0  💬 1  📌 0

Meta rep David Qorashi contends that AI companions will empower users with greater control over content and enable more transparency about content recommendations.

26.09.2025 22:06 · 👍 0  🔁 0  💬 1  📌 0

I've been looking forward to this panel on AI companions with @meetalijain.bsky.social all day. This one is going to be spicy 🔥 #TSRConf

26.09.2025 22:02 · 👍 1  🔁 1  💬 1  📌 0

Based on this analysis, children are exposed to three types of harms - explicit, implicit, and unintentional.

I'm a little unclear on the distinction between these three types of harms ❓

26.09.2025 21:16 · 👍 0  🔁 0  💬 0  📌 0

According to her research, harmful content is often framed as entertainment - eg offensive comedy or crime dramas - which can be problematic when children are exposed to it.

26.09.2025 21:13 · 👍 0  🔁 0  💬 1  📌 0

And lastly: Haning Xue from the University of Utah on the role of algorithms in amplifying harmful content to children. Xue's study started by auditing the algorithms of Instagram, TikTok, and YouTube and the characteristics of content recommended to children.

26.09.2025 21:11 · 👍 0  🔁 0  💬 1  📌 0

Ofcom researches choice architecture using online randomized controlled trials to test small changes to safety features (eg increasing the prominence of user safety tools), and behavioral audits to systematically map design practices and evaluate their potential impact on user behavior.

26.09.2025 21:05 · 👍 0  🔁 0  💬 1  📌 0

Porter says design - the choice environment - matters because people are flawed decision-makers. Aspects of a platform can affect what consumers do. (Love the behavioral economics on display ❤️)

26.09.2025 21:03 · 👍 0  🔁 0  💬 1  📌 0

Next up: Jonathan Porter from Ofcom (the British online safety regulator) on online safety! He starts with a spiel on the UK's Online Safety Act, which in his telling focuses on the backend of digital platforms. Porter leads Ofcom's behavioral insights team and often examines platform design.

26.09.2025 21:02 · 👍 0  🔁 0  💬 1  📌 0

CDT's recommendations: employers should assess the usefulness and necessity of hiring technology; deployments should adhere to accessibility guidelines (eg WCAG); and human oversight should be incorporated into all stages of using the technology

26.09.2025 20:52 · 👍 0  🔁 1  💬 1  📌 0

Key findings: Workers with disabilities experienced a variety of barriers and reported feeling "extremely discriminated against."

"They're consciously using these tests knowing that people with disabilities aren't going to do well on them, and are going to get screened out."

26.09.2025 20:49 · 👍 0  🔁 0  💬 1  📌 0

Next up! The wonderful @arianaaboulafia.bsky.social at @cdt.org giving a talk on the exclusion of disabled workers by digitized hiring assessments.

Background: companies are incorporating hiring technologies into employment decisions, which poses risks of discrimination and poor accessibility

26.09.2025 20:48 · 👍 1  🔁 1  💬 1  📌 0

The key finding: an overall increase in the intimacy expressed by models over time, though not all evaluation methods show a clear upward trend.

26.09.2025 20:45 · 👍 1  🔁 0  💬 1  📌 0

The research team evaluated 59 LLMs across nine companies from 2018 to 2025 🤖 for the level of intimacy expressed in their responses.

26.09.2025 20:42 · 👍 1  🔁 0  💬 1  📌 0

Next up: Pearl Vishen from UC Davis with the talk "Is Intimacy the New Attention? An Audit of Expressed Intimacy Across LLM Generations"

The key research questions: How does the level of expressed intimacy of LLMs evolve across generations? And has this gotten worse with subsequent generations of models?

26.09.2025 20:40 · 👍 0  🔁 0  💬 1  📌 0