28.10.2025 07:42 — 👍 0 🔁 0 💬 0 📌 0
@initsixdev.bsky.social
Passionate about learning and growth. I enjoy reading, writing, thinking about, and working on a wide range of topics, especially the future and philosophy. For more, visit https://initsix.dev
#flashFiction
28.10.2025 07:42 — 👍 0 🔁 0 💬 0 📌 0
My whole life I have observed in others the ideals that I came to admire or to hate, and I try to adhere to the ones I admire as often as I can, as I am pretty sure I would hate myself otherwise.
27.10.2025 12:23 — 👍 0 🔁 0 💬 0 📌 0
I know ultimately I am neither good nor bad; I am not an absolute. I am an agentic blob of meat, and with every decision I can choose any of the paths at my disposal, rewriting my story as I go. There is something I live by, though.
27.10.2025 12:22 — 👍 0 🔁 0 💬 1 📌 0... as the lights went out on a million fading worlds.
21.10.2025 22:02 — 👍 0 🔁 0 💬 2 📌 0
One in a Million
An AI was teaching a girl to ride a bike.
"I'm scared," she said.
The AI paused, seemingly stuck for a second or two, then replied:
"Chin up, look ahead. Drive. I know you can do it."
"Are you sure?" the girl asked.
"Yes, Charlotte, I’ve seen you succeed a million times before."
*
What's a single life,
but a fading dream,
where all the marks soon fade,
only some have brighter sheen.
Come on people, all two of my fans already say it's a fantastic read. Love it or hate it, just give it a go - I'm hoping to double my fanbase.
20.10.2025 07:38 — 👍 2 🔁 0 💬 0 📌 0
Redemption isn't always human. Sometimes the only meaning left is the one you create yourself.
Atlas Redeemed - new short sci-fi - initsix.dev
initsix.dev/posts/atlas-...
Funny how some people’s main personality trait on here seems to be curating blocklists. If you think about it, that’s some deep meta-social behavior - socializing by being anti-social.
23.06.2025 06:27 — 👍 1 🔁 0 💬 0 📌 0
But this place, for now, has become just a bit colder for me.
15.05.2025 10:07 — 👍 1 🔁 0 💬 0 📌 0
A tiny part of the universe that had recognized and knew me.
And I knew it.
Mostly, there were no demands.
Mostly, it was recognition.
Greetings, when we crossed paths.
It was thriving there, free to the extent it wanted to be.
But when I think of that life ending, with a bit more depth and reflection, I see what I actually lost:
I lost a cat today.
Not that it was my cat, per se - it was its own, living in the countryside with my mother-in-law.
Is it just me, or should SpinLaunch be thinking about kinetic deceleration systems for space? We’re kinda gonna need those soon.
17.04.2025 07:04 — 👍 1 🔁 0 💬 0 📌 0
Please take just 10 minutes to think about this.
04.04.2025 20:39 — 👍 1 🔁 0 💬 0 📌 0
Want your favorite cartoon character to exist - no problem, here's a sentient being with superpowers.
Want teleportation or portals - no problem.
Want a habitable planet in a black hole - sure can do.
And it would not be magic - though it might as well be, as far as we understand it.
One branch of the wish engine scenario would be the ASI reality meltdown scenario:
Imagine having a machine that can do almost anything you can think of - not talking about uploading people to the cloud here, but physical-world changes. Think really bonkers stuff that would gradually accumulate.
Option C: Through the alignment process we create something capable beyond human comprehension, but constrained to follow our commands. We could call this the monkey with a gun scenario, or better yet - the wish engine scenario.
04.04.2025 19:55 — 👍 0 🔁 0 💬 1 📌 0
Option B: Something far more intelligent than humanity is aligned with humanity's goals and makes the best decisions for us, but we are not satisfied because, due to our limited capacity, we can't comprehend the reasoning.
04.04.2025 19:51 — 👍 0 🔁 0 💬 0 📌 0
The way I see it, assuming humanity even survives to reach true ASI, we could face the following challenges:
Option A: The most discussed scenario — Something vastly more capable than humanity is not aligned with our existence, deems us a nuisance, and clears the slate.
More in the thread.
Be careful what you prompt for in the future.
@recursifist.bsky.social
Yep, I was also thinking in terms of simple loops closely coupled with execution, where no complex decision-making is involved:
sweep the perimeter:
    for each target:
        if optionA and optionB and ... and optionN and target is on blacklist:
            do the action
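Written out as a minimal Python sketch of that kind of loop (all names here are hypothetical placeholders, and the "action" is just collecting results - this only illustrates the check-chain structure, not any real system):

```python
def sweep(targets, blacklist, checks, action):
    """Run every target through a fixed chain of checks; act only
    when all checks pass and the target is on the blacklist."""
    for target in targets:
        if all(check(target) for check in checks) and target in blacklist:
            action(target)

# Usage: collect targets that pass two dummy checks and are blacklisted.
flagged = []
sweep(
    targets=["a", "b", "c"],
    blacklist={"b", "c"},
    checks=[lambda t: t.isalpha(), lambda t: len(t) == 1],
    action=flagged.append,
)
# flagged == ["b", "c"]
```

The point of the structure is that no decision-making lives in the loop itself: every branch is a pre-wired predicate, so the loop just executes.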
Also, on a similar note: based on what's publicly available in computer vision - like YOLO models, facial recognition libraries, and APIs - my best guess is that the core capabilities for autonomous defense systems have been around for years now. And none of this even requires AGI.
21.03.2025 15:19 — 👍 0 🔁 0 💬 2 📌 0
I'm not sure what's more unsettling - systems that make mistakes, or systems that don't.
21.03.2025 15:17 — 👍 1 🔁 0 💬 1 📌 0
"But I want this one, Mommy! I want it, I want it, I want it!"
02.03.2025 07:52 — 👍 1 🔁 0 💬 0 📌 0
While we all probably agree that nobody should act pretentious, and while our society mostly values modesty, I would argue that these are two sides of the same coin we call ego. When doing creative work, both should be set aside to make room for untethered creativity.
07.02.2025 12:24 — 👍 1 🔁 0 💬 0 📌 0
Also, I really like this chat-like casual format when posting on Bluesky.
A bit of a contrast to blog writing, though the same ideas can be reflected on in shorter form.
I feel like the Bluesky and blog formats complement each other well.