
Jason Murphy

@jason-murphy.bsky.social

Interested in AI ethics, tools, and machine learning. Sometimes seen in CODE Magazine doing silly stuff with AI.

72 Followers  |  45 Following  |  84 Posts  |  Joined: 07.06.2024

Latest posts by jason-murphy.bsky.social on Bluesky

The Plush Planet: 1960s Space Age Adventure
YouTube video by Ai Fascinated

The AI Fascinated channel on YouTube keeps turning out bangers.

30.07.2025 17:01 | 👍 1    🔁 0    💬 0    📌 0

Agreed! Take the trash out. Compliment my beard. It's not hard.

29.07.2025 15:46 | 👍 1    🔁 0    💬 0    📌 0
Unitree G1 Humanoid Robot Boxing: All the WILDEST Highlights | What The Future
YouTube video by CNET

A technology that will reshape the world, but sure ... let's just make them fight. 🙄

29.07.2025 13:44 | 👍 1    🔁 0    💬 1    📌 0
China lays out its AI vision in foil to Donald Trump’s ‘America First’ plan
Beijing plugs global co-operation and open-source models at first industry show since DeepSeek breakthrough

While the US tries to establish itself as the castle on the mountain, China puts forward a plan of global cooperation.

29.07.2025 13:41 | 👍 1    🔁 0    💬 0    📌 0
Introducing Opal: describe, create, and share your AI mini-apps - Google Developers Blog
Discover Opal, a new experimental tool from Google Labs that helps you compose prompts into dynamic, multi-step mini-apps using natural language.

As someone who has been publicly harassed for 'vibe coding,' I'm thrilled to see big players like Google embracing it.

Before long, it will evolve into users just asking the LLM to craft something that suits their immediate needs. Fully formed and ready for use.

28.07.2025 15:01 | 👍 0    🔁 0    💬 0    📌 0

I'm excited to dive into this. As a dabbler and neophyte, I'm grateful for anything that simplifies the API process.

28.07.2025 14:45 | 👍 0    🔁 0    💬 0    📌 0

Not my paper, of course, but it highlights how it's often not a technical problem that gets in the way.

Employees will adhere to old methods and often adopt a 'shadow workflow' to sidestep new tools.

28.07.2025 13:10 | 👍 1    🔁 0    💬 0    📌 0
(PDF) OVERCOMING BARRIERS TO ARTIFICIAL INTELLIGENCE ADOPTION
PDF | The purpose of this study is to explore the barriers to the successful implementation of Artificial Intelligence (AI) in organizations, focusing...

I recently spoke with a consultant trying to use AI to streamline ground-level processes in the oilfield industry.

After a long back and forth, we realized that the resistance to adoption wasn't technical inadequacy but 'cultural inertia' in the organization.

28.07.2025 13:09 | 👍 1    🔁 0    💬 1    📌 0

So many exciting new nodes from n8n. Can't wait to see what I can gin up.

28.07.2025 12:59 | 👍 0    🔁 0    💬 0    📌 0

Absolutely. Well said. We are losing a propaganda war.

27.07.2025 17:21 | 👍 1    🔁 0    💬 0    📌 0
(PDF) The Sisyphean Cycle of Technology Panics
PDF | Widespread concerns about new technologies, whether they be novels, radios, or smartphones, are repeatedly found throughout history. Although tales...

A great paper on technology panic by @orbenamy.bsky.social . Turns out the cycles are pretty predictable!

I wonder what stage we're at with AI ...

27.07.2025 17:20 | 👍 2    🔁 0    💬 0    📌 0

Yes! Wealth disparity, homelessness, etc., are all intentional. We can do better and we know how; we just choose not to.

(we = robber barons)

27.07.2025 17:12 | 👍 0    🔁 0    💬 1    📌 0

AI isn't the threat. Unchecked, cancerous capitalism is.

27.07.2025 16:51 | 👍 3    🔁 0    💬 1    📌 0
Shame in the machine: affective accountability and the ethics of AI
Rachel McNealis
Received: 14 January 2025 / Accepted: 26 June 2025
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025
Abstract
The cultural weaponization of shame surrounding the use of artificial intelligence (AI) tools like ChatGPT often redirects ethical scrutiny away from systemic concerns and toward individual users. Drawing on Sara Ahmed’s affect theory, this paper argues that cultural narratives of "AI shaming" function as moral displacement that redirects scrutiny away from the environmental costs, exploitative labor practices, and corporate monopolization defining contemporary AI development. The analysis examines how shame operates across academic and professional settings to create "effort anxiety" that demands both visible human labor and accelerated productivity. Current discourse treats AI use as a personal virtue problem and obscures the carbon-intensive data centers, underpaid content moderators, and proprietary knowledge systems that enable these technologies. Instead of eliminating shame, the paper proposes redirecting it toward collective accountability for AI’s systemic harms. Environmental degradation, algorithmic bias, and extractive infrastructures represent the true ethical frontier of artificial intelligence. Policy frameworks, educational interventions, and governance structures offer pathways for transforming shame from individual punishment into institutional reform. The stakes extend beyond AI itself: as emerging technologies reshape society, the patterns of moral responsibility established now will determine whether innovation serves collective flourishing or perpetuates existing inequalities. Shame can become a vehicle for institutional critique and systemic accountability if we redirect its focus from individual users to the powerful corporations, governance structures, and infrastructural s…


Seems that "AI shaming" is becoming a whole genre of study among AI boosters

link.springer.com/article/10.1...

22.07.2025 14:57 | 👍 15    🔁 2    💬 5    📌 3

I foresee a near future where most of my friends are artificial.

26.07.2025 23:01 | 👍 1    🔁 0    💬 0    📌 0

At one time...

26.07.2025 22:31 | 👍 0    🔁 0    💬 0    📌 0
ALT: a black and white photo of a little girl making a funny face.

Has anyone lost friends because of their enthusiasm for AI? I'm approaching double digits.

26.07.2025 17:08 | 👍 2    🔁 0    💬 1    📌 0
Multiple drafts model - Scholarpedia

A fun thought exercise:

I've seen some folks discussing Dennett's Multiple Drafts Model of consciousness and how it compares to LLMs. Both involve iterative revisions and lack a 'central controller' deciding what the person/model 'believes'.

(It's not grounds for claims of consciousness, though.)

26.07.2025 13:26 | 👍 2    🔁 0    💬 0    📌 1
China’s Unitree Offers a Humanoid Robot for Under $6,000
Unitree Robotics is marketing one of the world’s first humanoid robots for under $6,000, drastically reducing the entry price for what’s expected to grow into a whole wave of versatile AI machines for...

The curse of early adoption be damned. We will definitely be buying a robot soon.

25.07.2025 15:23 | 👍 1    🔁 0    💬 0    📌 0

Oh, I agree with all of that. 100%. We need serious guardrails.

24.07.2025 17:59 | 👍 0    🔁 0    💬 0    📌 0

Still skeptical!

24.07.2025 17:58 | 👍 0    🔁 0    💬 0    📌 0

But since the big egg is actually the tumor of toxic, metastasized capitalism? Smash that thing on the sidewalk.

24.07.2025 17:24 | 👍 0    🔁 0    💬 1    📌 0

That is unrelated to the question I posed.

24.07.2025 17:22 | 👍 0    🔁 0    💬 1    📌 0

I would take you up on that.

24.07.2025 17:21 | 👍 1    🔁 0    💬 0    📌 0

But agentic AI uses ChatGPT.

24.07.2025 17:21 | 👍 0    🔁 0    💬 1    📌 0

Is all of the anti-AI furor legitimate? Or is this part of a concerted effort to poison the well?

There are bad actors (governments/corporations/oligarchs) out there who would love for the hoi polloi to forsake the tech. Gotta protect that hegemony.

Cui bono?

23.07.2025 21:13 | 👍 1    🔁 0    💬 3    📌 0
George Lucas Thinks Artificial Intelligence in Filmmaking Is 'Inevitable' - IGN
In a recent interview, George Lucas was asked about artificial intelligence in filmmaking, which he called "inevitable": "It's like saying, 'I don't believe these cars are gunna work. Let's just stick...

"But the thing of it is," Lucas goes on, "it's inevitable. I mean, it's like saying, 'I don't believe these cars are gunna work. Let's just stick with the horses. Let's stick with the horses.' And yeah, you can say that, but that isn't the way the world works."

08.07.2025 16:04 | 👍 2    🔁 0    💬 0    📌 0

Everyone tells me that when they took mushrooms, the best thing was to go outside, be in nature, and be with someone who cares about them.

ChatGPT seems technically capable, but ... antithetical to that idea.

08.07.2025 12:42 | 👍 1    🔁 0    💬 0    📌 0
People Are Using AI Chatbots to Guide Their Psychedelic Trips
As psychedelic companies and therapy apps experiment with AI, people are already taking huge doses of drugs and using chatbots to process their trips.

ChatGPT as a trip nanny? I can see it.

07.07.2025 13:40 | 👍 0    🔁 0    💬 1    📌 0
1 Hour of Retro-Futuristic Space Exploration
YouTube video by Ai Fascinated

One of my favorite uses so far of AI music and video:

06.07.2025 12:42 | 👍 2    🔁 0    💬 0    📌 0

@jason-murphy is following 20 prominent accounts