My breakdown of Australia's new regulation for AI companion chatbots
www.henryfrasertechlaw.com/post/new-reg...
@henrylfraser.bsky.social
Law academic working on AI safety and responsibility in networked digital value chains
Good governance would require and incentivise AI/ADM providers and deployers to be *mensches* (compassionate, humane, decent, capable of imagination), and not to treat compliance as mere process and box-ticking. (Everyone)
03.07.2025 06:01
AI and ADM providers and deployers should have a statutory duty to take reasonable care to prevent foreseeable harm (general, positive, vague, and thus context-sensitive), in addition to specific process-based risk controls such as those in the EU AI Act. (Me, Kim)
Reforming privacy law with a 'fair and reasonable' requirement for use of personal info is a critical step to reduce downstream risks of harm from AI and ADM in social services. (Kim, Sam)
Governance is more than rules. Implementation determines outcomes, and many orgs providing social services seriously struggle with implementation. (Kath)
'Human-in-the-loop' is not a fix-all for automated decision-making. One 'human' in an org is not going to push back on the systems, practices and assumptions that led the org to irresponsible/penny-pinching/unfair deployment of ADM in the first place. (Jake)
Great panel yesterday on 'Governing automated decision-making for positive social services' with @kimweatherall.bsky.social, Samantha Floreani, Jake Goldenfein, and Kath Albury, organised and chaired by Christine Parker at @admscentre.org.au. The highlights of the conversation (in thread):
But it doesn't really matter what the original intent of @pro_ai_artist was. It seems clear that AI art's biggest utility right now is aspirationalism: the ability to quickly and cheaply generate a vision of the future for Trump supporters. And I've written before about how AI art is to modern fascism what futurism was to 20th-century fascism, but the Homeland Security X account posting a Thomas Kinkade painting - and a right-wing user getting mad at them and AI-generating a more "futuristic" version - unlocked something for me. Unlike 20th-century futurism, AI art is, by definition, cobbled together out of previous art styles. An AI model cannot create anything new, only remix what it's ingested. Which is an apt metaphor for the Trump administration. An aspirational vision of the future that's completely defined by nostalgia. A complete cultural dead end.
Ryan Broderick has been arguing that AI slop is to Trumpism as futurism was to fascism, and he just added that AI's remixing of its training data is "an aspirational vision for the future that's completely defined by nostalgia."
(Just like fascism was!)
www.garbageday.email/p/trump-s-bi...
Don't let a company that doesn't care about you think your thoughts for you with a pseudo-brain they still haven't finished making. Please.
www.henryfrasertechlaw.com/post/the-ai-...
After marking too many papers riddled with AI-generated nonsense, I worry about a future of *artificial general stupidity*. Will business, government and just about everyone succumb to the lure of cheap AI tools that aren't fit for purpose?
www.henryfrasertechlaw.com/post/ai-and-...