
Lee B. Cyrano

@leebriskcyrano.bsky.social

143 Followers  |  479 Following  |  165 Posts  |  Joined: 15.06.2025

Posts by Lee B. Cyrano (@leebriskcyrano.bsky.social)

the self is fake! don't let the liberal media tell you otherwise

03.03.2026 18:33 — 👍 4    🔁 0    💬 1    📌 0

I did not care for von Foerster but I would strongly endorse Varela & Maturana

03.03.2026 18:32 — 👍 2    🔁 0    💬 1    📌 0

silence 2dcel, a higher dimensionoid is speaking

03.03.2026 00:55 — 👍 135    🔁 17    💬 7    📌 0

The "cut fruit diaspora poetry" is a known stereotype within the Asian-American community. I am writing about the women around me, which is gross in the way all writers are gross.

27.02.2026 01:06 — 👍 3    🔁 0    💬 3    📌 0
Preview: The Nature of the Firm

This is Coase's point in "The Nature of the Firm": there is a cost to this kind of contracting/monitoring, and hiring someone as an employee (with residual claims on their activity) economizes on it.

Uber/Doordash present a model where this "contracting" is algorithmically mediated

27.02.2026 01:04 — 👍 2    🔁 0    💬 0    📌 0
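The Coasean make-or-buy tradeoff in that post can be sketched as a toy calculation (all numbers and the function name are my own illustrative assumptions, not anything from Coase or the post): contract per task on the market, or hire an employee at a fixed salary, and the firm "exists" wherever it economizes on contracting costs.

```python
# Toy Coasean make-or-buy sketch. Hypothetical numbers throughout:
# each market task carries a price plus a contracting/monitoring cost,
# while an employee is a fixed salary with a residual claim on their time.

def cheaper_to_hire(tasks_per_month, market_price, contracting_cost, salary):
    """True when the fixed employment cost beats per-task market contracting."""
    market_total = tasks_per_month * (market_price + contracting_cost)
    return salary < market_total

# At low task volume the market wins; past some volume the firm economizes.
print(cheaper_to_hire(10, 100, 50, 6000))  # 10 * 150 = 1500 < 6000 -> market
print(cheaper_to_hire(50, 100, 50, 6000))  # 50 * 150 = 7500 > 6000 -> hire
```

The gig-platform twist in the post is then: algorithmic mediation lowers `contracting_cost`, shifting the boundary back toward the market.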

firing all my useless ZIRP-era hiring glut imbeciles under the guise of "AI"

27.02.2026 00:43 — 👍 21    🔁 3    💬 1    📌 0

"This is how we beat the communists" I say, adding another PDF to my Zotero library while ignoring my boss's emails.

27.02.2026 00:41 — 👍 3    🔁 0    💬 1    📌 0

Most neurons in the brain are "dawdling" from a Marxist perspective. Huge caloric intake relative to individual contribution.

27.02.2026 00:39 — 👍 4    🔁 0    💬 1    📌 0

nobody is prepared for what's coming with AI

25.02.2026 22:50 — 👍 1    🔁 0    💬 1    📌 0

Setting up an internal futarchy at my company so I can rugpull my chud subordinates at will

25.02.2026 18:16 — 👍 14    🔁 2    💬 1    📌 0

someday I'll have a corrido written about me

24.02.2026 07:12 — 👍 4    🔁 0    💬 1    📌 0

Claude and I set up OpenClaw on a home server for my brother. Absolutely dogshit software, but he seems happy with it.

24.02.2026 05:46 — 👍 16    🔁 0    💬 0    📌 0

not sure why my feedback went unappreciated here

20.02.2026 17:40 — 👍 1    🔁 0    💬 0    📌 0
Post image

map of my belief system

20.02.2026 04:40 — 👍 13    🔁 1    💬 0    📌 0

we've been "past the event horizon" for a while

19.02.2026 16:04 — 👍 3    🔁 0    💬 0    📌 0

borges story

19.02.2026 05:07 — 👍 6    🔁 0    💬 0    📌 0

what im getting at is dependent origination in the Buddhist sense but that dingus eshear made it cringe

19.02.2026 00:18 — 👍 2    🔁 1    💬 2    📌 0

preferences are not exogenous to the systems that could be said to enact said preferences. it's a modeling choice that Yud takes as a fact about reality (given this weak "sufficiently optimized" qualifier now, which is circular)

19.02.2026 00:16 — 👍 0    🔁 0    💬 1    📌 0

if the system "has" a utility function in this computational sense, i.e. the behavior is purely "analytic," then "in theory and difficult practice" one can design the right utility function such that AI is friendly

but this analytic/synthetic distinction is self-defeating per Quine and others

19.02.2026 00:14 — 👍 0    🔁 0    💬 1    📌 0

There is this quiet equivocation between "the AI is an optimizer" and "the AI behaves as if it were an optimizer" that is doing all the work of "alignment"

19.02.2026 00:14 — 👍 1    🔁 0    💬 1    📌 0
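The equivocation can be made concrete with a toy construction (mine, not from the thread): any finite behavior trace can be rationalized after the fact by *some* utility function, so "behaves as if it were an optimizer" by itself constrains nothing about the system.

```python
# Sketch: for ANY observed behavior, build a utility function under which
# every observed choice was optimal. The construction is trivial, which is
# the point: "as-if optimization" is vacuous without further assumptions.

def rationalize(trace):
    """trace: list of (option_set, chosen) observations.
    Returns a utility function that makes every observed choice optimal."""
    ever_chosen = {choice for _, choice in trace}
    # Utility 1 for anything the agent ever picked, 0 otherwise.
    return lambda option: 1.0 if option in ever_chosen else 0.0

# An arbitrary, even "irrational"-looking behavior trace.
trace = [({"a", "b"}, "a"), ({"b", "c"}, "c"), ({"a", "c"}, "a")]
u = rationalize(trace)

# Every observed choice maximizes u over its option set.
assert all(u(choice) == max(u(o) for o in options)
           for options, choice in trace)
```

Claiming the system *is* an optimizer (the "analytic" reading above) is a much stronger, separate claim than this as-if redescription.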
Post image

i think this extra-snarky yud post gets at the problem

www.lesswrong.com/posts/cnYHFN...

19.02.2026 00:14 — 👍 1    🔁 0    💬 1    📌 0

yea ive been fucking losing it honestly

19.02.2026 00:04 — 👍 2    🔁 0    💬 0    📌 0
Post image

here I'd recommend Quine's "two dogmas of empiricism" to unpack what I mean. we shouldn't take the syntactical content of "rational Bayesian inference" for the semantic content of subjectivity such that bounding one necessarily bounds the other

www.theologie.uzh.ch/dam/jcr:ffff...

19.02.2026 00:03 — 👍 1    🔁 0    💬 1    📌 0

but you can claim mastery of LBC thought. that's allowed

18.02.2026 23:59 — 👍 1    🔁 0    💬 1    📌 0

anti-yud but also "alignment" treats preferences as ontologically primary in a way that rehashes logical positivist failures. alignment isn't real

18.02.2026 23:57 — 👍 2    🔁 0    💬 2    📌 0

my read as well

18.02.2026 23:45 — 👍 1    🔁 0    💬 1    📌 0

basically your definition of what counts as "information processing" is doing too much work, given the distinction between "computing" and "non-computing" matter is asserted but not demonstrated

18.02.2026 23:25 — 👍 0    🔁 0    💬 0    📌 0

dn790006.ca.archive.org/0/items/norb...

cybernetics pdf

18.02.2026 23:20 — 👍 1    🔁 0    💬 1    📌 0

That there exist algorithms that incorporate randomness does not mean you have accounted for the unobservable randomness (unknown unknowns) of your environment

18.02.2026 23:20 — 👍 1    🔁 0    💬 1    📌 0
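A minimal sketch of the point (the Normal-vs-Cauchy setup is my own assumption, not from the thread): an algorithm can sample randomness internally, calibrated to a Gaussian noise model, and still badly underestimate tail events when the environment's actual randomness is heavier-tailed than anything it ever sampled.

```python
import math
import random

random.seed(0)
N, THRESHOLD = 100_000, 4.0

# Internal model: the algorithm "incorporates randomness" by Monte Carlo
# sampling from its assumed noise model, Normal(0, 1).
p_assumed = sum(abs(random.gauss(0.0, 1.0)) > THRESHOLD for _ in range(N)) / N

# Actual environment: heavy-tailed (standard Cauchy, via inverse CDF),
# a tail structure the algorithm's internal randomness never touched.
p_true = sum(abs(math.tan(math.pi * (random.random() - 0.5))) > THRESHOLD
             for _ in range(N)) / N

# The assumed model puts ~6e-5 mass past 4 sigma; the Cauchy puts ~0.16.
print(f"assumed tail prob ~ {p_assumed:.5f}, actual ~ {p_true:.5f}")
```

Randomness you generate yourself only covers the distribution you chose to generate it from; the unknown unknowns live in the gap between that choice and the environment.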

I would encourage you to read Norbert Wiener's Cybernetics, specifically the first chapter on Newtonian vs. Bergsonian/statistical time, for a treatment of the problem of "randomness" that doesn't hand-wave it away

18.02.2026 23:20 — 👍 0    🔁 0    💬 1    📌 0