a. piece. of. software. is. not. the. type. of. entity. that. can. gain. consciousness.
06.03.2026 15:44 — 👍 161 🔁 47 💬 9 📌 1
@ambrosia-engine.bsky.social
Writer, author, and researcher. PhD student in Informatics: AIAI: Automated Reasoning, Agents, Data Intensive Research, Knowledge Management. No, I don't know why the title is so long. Also Affiliate of the Center for Technomoral Futures. Views are my own.
MagicSchool uses a multi-model approach and pairs each tool with the model that performs best for its task. Today, we use multiple providers, including OpenAI GPT, Anthropic Claude, and Google Gemini, among others. We test models continually to ensure quality, safety, and reliability.
Also joke is on them because MagicSchool uses ChatGPT.
06.03.2026 00:11 — 👍 6 🔁 3 💬 0 📌 0
I agree with most of this, also thank you cause it was insightful, I just wouldn't discount the minor actions currently not policed. It is not a crime to delete one's Instagram account (yet), but this has the tangible impact of reducing the amount of data Instagram can extract from you.
06.03.2026 18:12 — 👍 1 🔁 0 💬 1 📌 0
I mean obviously we still need to get policymakers to hold these companies accountable and make meaningful interventions to prevent these harmful designs across every layer of the digital stack. Hard things are worth doing though.
06.03.2026 15:09 — 👍 0 🔁 0 💬 0 📌 0
I think the best way to counteract this is by developing a political imagination of the various individual & collective capacities we have to act from our positions within our communities and institutions, and of the tangible impacts (however minimal) that taking those actions can actually achieve.
06.03.2026 15:09 — 👍 0 🔁 0 💬 1 📌 0
1. We use the expression power to act rather than the more common philosophical term agency. From a Foucauldian perspective (Foucault 1982), power exists only in its exercise and is relationally constituted—it is not a latent capacity that a subject possesses but a situationally embedded possibility (Mühlhoff 2018; Vogelmann 2017; Saar 2007). The notion of power to act (Handlungsmacht) thus enables a graduated and relational description of situations, even where possibilities for action are highly constrained. Unlike agency, it does not imply an individualized inner capacity existing independently of its enactment. By contrast, when we speak of self-efficacy, we refer to the subjective perception of effectiveness—that is, the felt (in)ability to shape outcomes. Yet such perceptions are themselves effects of sociotechnical situatedness. Hence, power to act captures the relational and structural conditions of action, while self-efficacy describes their subjective experiential dimension.
I think they address this with how they use "power to act" rather than agency to describe these contradictions. We still have power to act, even if those potentialities are highly constrained, but our inability to wield that power further constrains it.
06.03.2026 15:09 — 👍 1 🔁 0 💬 2 📌 0
Love this thread. You both might like this paper:
onlinelibrary.wiley.com/doi/10.1002/...
I'm wondering how educators can best challenge the feelings of helplessness these sociotechnical contradictions engender.
Crucially, in our analysis, AI resignation is not only an empirical finding but also the effect of a strategy of power: by engaging in AI resignation, subjects unwittingly participate in the wieldings of the power apparatuses of digital capitalism, because these apparatuses tend to benefit from subjects' resignation. This form of power thus operates by hollowing out the very basis of resistance—self-efficacy—so that experiences of participation, creativity, or deviation appear increasingly empty or structurally impossible.
This is so good, I wanna hang it on my wall.
06.03.2026 10:43 — 👍 3 🔁 1 💬 0 📌 0
"This article develops the concept of ‘AI resignation’ to capture how young people encounter AI not only as a helpful or flawed tool, but as an overpowering and seemingly inevitable force that can foreclose their sense of political and personal power to act in relation to the future."
06.03.2026 08:54 — 👍 3 🔁 0 💬 0 📌 0
because the rest are cowards
06.03.2026 07:59 — 👍 153 🔁 28 💬 4 📌 1
This explains a lot particularly around why TikTok's data centre development has been so aggressive and impatient
restofworld.org/2025/brazil-...
www.theguardian.com/technology/2...
The attacks on Iran from the US and Israel further confirm that they’re rogue states operating outside international law. They need to be reined in, but we also need to reduce US leverage over us—which includes getting off US tech and developing alternatives.
The latest Disconnect recap is out now:
My latest piece for Science examines the global AI value chain, how critical minerals are reshaping geopolitical competition across Global Majority countries, and why oversight of these resources must be embedded into AI governance.
www.science.org/doi/10.1126/...
a map of openai's influence on other media companies
Amazing site here via @timnitgebru.bsky.social - a map of big tech influence on media and media companies.
imo this goes a decent way to explaining why coverage of AI specifically has been so shockingly bad recently. Very useful resource!!
nananwachukwu.github.io/media-captur...
Woodrow Hartzog & Neil Richards: "The Legislature should ignore the high-priced lobbyists and pass a law that actually protects us from data-hungry business practices that benefit no one but big tech." @hartzog.bsky.social www.wbur.org/cognoscenti/...
04.03.2026 22:33 — 👍 19 🔁 8 💬 0 📌 0
"In attempting to overcome dependency through purely technical means, they inadvertently created new forms of subordination."
05.03.2026 15:13 — 👍 0 🔁 0 💬 0 📌 0
“Most of what a writer experiences is failure. Developing a voice takes years. The point is not to make it out of the woods quickly or unscathed. Getting lost is not the rough part. It’s the whole thing.”
—Charles Yu
@theatlantic.com
@authorsguild.org @penamerica.bsky.social @wgawest.bsky.social
“I used a fake ID of Stalin and it got accepted”
More countries are pushing social media platforms to conduct age verifications, but the methods are far from foolproof
https://restofworld.org/2026/social-media-age-verification-tools/?utm_campaign=row-social&utm_source=bluesky
How many of them are in the Epstein files?
05.03.2026 13:05 — 👍 0 🔁 0 💬 0 📌 0
Nine, mostly white dudes on stage in a manel.
Everyone wants to sign letters The Future of Life Institute puts out every few years, it seems.
Take a look at this manel which happened around their first letter 🙄
The billionaires and eugenicists on this manel are the actual existential risks to humanity we should worry about.
Inevitably they will blame psychosis. And we've seen this before, with companies and academics claiming lung cancer is caused by stress, not smoking!
Remember Hans Eysenck? www.theguardian.com/science/2019...
> This research programme has led to one of the worst scientific scandals of all time
1/n
We went from making up reasons like "weapons of mass destruction" to you know what, fuck it we don't need to even lie, the president "had a feeling" that they pose a threat so we just bomb them so we can. A country doing this can't expect peace inside.
05.03.2026 07:20 — 👍 2212 🔁 482 💬 27 📌 20
At this point is there a tracker for chatbot related deaths?
05.03.2026 10:01 — 👍 1 🔁 0 💬 1 📌 0
Yeah, I'm trying to think about some of the big problems in science right now and be more thoughtful about how I work and how I contextualize my work. It's also really important to me that we all critically examine the way academia functions. But that's long term, science should be slow.
05.03.2026 04:30 — 👍 5 🔁 1 💬 0 📌 0
"Cognition is a collective activity." Wow, I love this line. I'm less on the philosophy side of things, but love reading it, and especially spend a lot of time thinking about this (recently wrote a piece on this actually), so I'm excited to add Fleck to my reading list, thank you.
05.03.2026 04:07 — 👍 1 🔁 1 💬 1 📌 0
Yeah that's the shit
ChatGPT uninstalls surged by 295% after DoD deal | TechCrunch
techcrunch.com/2026/03/02/c...
Thank you, these all seem explicitly useful for my research! I've encountered similar findings already on the persistence of false beliefs and cognitive dissonance, so I'm keen to see what these add, I'm very interested in the idea of epistemic capture as you mentioned.
04.03.2026 19:10 — 👍 2 🔁 0 💬 1 📌 0
How many of us read it and didn't believe for science-based reasons vs not believing it because our social context told us not to?
We tend to frame it as if we have one rational side and one set of idiots, but... attribution bias. Context is more complex to tweak. We're entitled to expect integrity!
Wait, do you have any paper recommendations on this?
04.03.2026 18:44 — 👍 1 🔁 0 💬 3 📌 0
Slide: Misinformation: the false dichotomy
➢ Most discussion of misinformation is overly simplistic: “With us or against us!”
➢ “Things are OK, back to normal!” vs “Things are not OK (because microchips/5G/etc)”
➢ Often led by institutional voices
➢ Frames misinformed beliefs as “human error” (and claims of stupidity, malice, etc)
➢ Proposed solutions generally involve shouting at (big budgets for PR campaigns) and/or punishing (social media bans) people until they do what they are told
➢ That’s about obedience, not understanding
➢ It also doesn’t work
Slide: Preamble: “human error” is not a cause
➢ One of the most common findings of an incident investigation, across fields, is “human error”
➢ That answers “Who is to blame?”
➢ But humans commit errors – that’s unavoidable!
➢ Preventing failure is a systems problem. We have to ask: “Why does the system allow a predictable event (human error) to lead to a significant failure?”
Slide: Preamble: “human error” is not a cause
➢ This does not mean there is not a place for accountability, but we have an obligation to learn from failure
➢ When the same error is widely repeated, we also have to ask: “Why is the system creating this error?”
➢ “Human error” is at most a component of a bigger systems failure, and often an excuse to stop thinking
Love these three slides especially, can't wait to read this now:
direct.mit.edu/books/oa-mon...
Thank you.