Ego Architecture in the Digital Age: A Systems Analysis
@ary-hh.bsky.social
Theoretical Physics to Quantum AI Ethicist | Coffee & deep thought in search of meaning between qubits & stars | Open for Humane AI & High-tech Collaboration
31.12.2025 05:38
So let's break it down using physics: life, which keeps moving forward, must obey the law of conservation of energy. It can't overreach, but it can't lose momentum either.
31.12.2025 04:01
Losing loved ones, watching the AI hype continue to reach absurd levels... I realized: time is like entropy, it can't be turned back.
31.12.2025 04:01
The differences between Artificial Intelligence, Machine Learning, and Deep Learning
30.12.2025 15:58
ChatGPT vs Gemini vs Claude vs Grok
30.12.2025 11:53
I'm sharing a free ebook I wrote myself while learning to write (English version) about AI algorithms and human consciousness:
"THE GOD ALGORITHM – When Logic Learns to Lie"
If you'd like to read it, please download it from the link:
drive.google.com/file/d/1m7M8...
After reading, I'm open to discussions or feedback.
Harari is right: AI is adept at detecting emotions through data patterns (often 80-95% accurate), but humans excel because they actually feel the emotions themselves.
AI is merely a mirror, not the one doing the feeling.
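For the curious, here is a tiny Python sketch of what "detecting emotions through data patterns" looks like in practice: a bag-of-words classifier that maps text to an emotion label. The six example sentences and their labels are toy data I made up; the 80-95% figures above refer to much larger benchmark setups, not to this sketch.

```python
# Toy illustration of pattern-based "emotion detection" (requires scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training sentences and labels, just to show the mechanics.
texts = [
    "I can't stop smiling today", "this is the best news ever",
    "I miss them so much it hurts", "everything feels heavy and grey",
    "how dare they do this to us", "I'm absolutely furious right now",
]
labels = ["joy", "joy", "sadness", "sadness", "anger", "anger"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

for sentence in ["I am so happy for you", "this makes my blood boil"]:
    label = clf.predict([sentence])[0]
    probs = dict(zip(clf.classes_, clf.predict_proba([sentence])[0].round(2)))
    print(f"{sentence!r} -> {label} {probs}")
# The model matches word patterns to labels; at no point does it feel anything.
```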
Sergey Brin's return to "founder mode" at Google AI was a good move.
Sundar Pichai quoted Brin as saying he coded frequently, and that together they watched the loss curves of AI models.
According to Jin, this "saved" Google, thanks to early bets like the TPU (2013), the DeepMind acquisition, and the competition with OpenAI.
The response emphasized the founder's passion as key to Google's long-term innovation in AI.
11.12.2025 14:57
Why did I create CATERYA as an open-source toolkit, not a ready-to-use application?
github.com/AryHHAry/CAT...
- I believe that AI ethics should be measured like the laws of physics: quantifiable, repeatable, and testable by anyone, not a qualitative checklist or a closed app.
- Open-source is the only way to prevent this evaluation from being controlled by a handful of giants. Fork it, audit it, improve it: that has been my principle from the start.
- The modular toolkit forces us to remain flexible: it can be used by small labs, startups, and even regulators without being forced to adopt my "official" UI or version.
- I want this to be a global standard born from the community, not from one person or one company.
The bottom line: CATERYA is not a product. It's an invitation for all of us to collectively define "trustworthy AI." GitHub is in my bio if you want to contribute.
When I first started learning programming/coding... I forget exactly when. I was pushed into it because I took Algorithms & Programming (semesters 3 & 4) and then Computational Physics (semesters 5 & 6).
If you're studying independently, try:
- Determine your goal (web, applications, data science)
- Choose an appropriate programming language (Python or JavaScript are popular for beginners)
- Master the basic concepts (variables, loops, conditions, algorithms)
- Practice by creating simple projects (To-Do Lists, personal websites; see the sketch right after this list)
- Practice consistently and make use of the community
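Since I mention To-Do Lists above, here is the promised minimal sketch: a first Python project that only uses the basics from the list (variables, a loop, conditions, and a list). The commands and structure are my own arbitrary choices, not from any particular tutorial.

```python
# A first "To-Do List" project: variables, a loop, conditions, and a list.
todos = []  # variable holding the current tasks

while True:  # loop until the user quits
    command = input("add <task> / done <number> / list / quit: ").strip()
    if command == "quit":
        break
    elif command == "list":
        for i, task in enumerate(todos, start=1):
            print(f"{i}. {task}")
    elif command.startswith("add "):
        todos.append(command[4:])             # store the new task
    elif command.startswith("done ") and command[5:].isdigit():
        index = int(command[5:]) - 1
        if 0 <= index < len(todos):           # condition: is the index valid?
            print("finished:", todos.pop(index))
        else:
            print("no such task")
    else:
        print("unknown command")
```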
CATERYA is a new open-source Python toolkit (launched Dec 2025) for quantitatively measuring AI trustworthiness through four pillars:
- Bias & Fairness
- Interpretability
- Robustness
- Repository Provenance
github.com/AryHHAry/CAT...
This isn't Fairlearn 2.0; it's Fairlearn + LIME + Adversarial Robustness + Git Provenance in one breath.
And it's all open-source, Docker-ready, and born yesterday.
2025 finally has a tool that assesses AI not just on "how fair it is" but on "how trustworthy it is end to end," including whether its repositories are truly human-auditable.
github.com/AryHHAry/CAT...
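To make the four pillars concrete, here is a hypothetical Python sketch of the kind of end-to-end "trust report" they imply. To be clear: this is not CATERYA's actual API. The dummy model, the metric choices (a demographic-parity gap for fairness, permutation importance for interpretability, a noise flip-rate for robustness, git log metadata for provenance), and all numbers are stand-ins I picked for illustration; the real toolkit wires Fairlearn, LIME, adversarial checks, and repository provenance together instead.

```python
# Hypothetical four-pillar "trust report" sketch; not CATERYA's real API.
import subprocess
import numpy as np

rng = np.random.default_rng(42)

# Dummy data and a dummy model so the sketch is self-contained.
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)        # a sensitive attribute (0/1)

def model_predict(X):
    return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

y_pred = model_predict(X)

# 1. Bias & Fairness: demographic-parity gap between the two groups.
fairness_gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# 2. Interpretability: crude global importance via per-feature shuffling.
importances = []
for j in range(X.shape[1]):
    X_shuf = X.copy()
    X_shuf[:, j] = rng.permutation(X_shuf[:, j])
    importances.append(float(np.mean(model_predict(X_shuf) != y_pred)))

# 3. Robustness: how many predictions flip under small input noise.
X_noisy = X + rng.normal(scale=0.05, size=X.shape)
flip_rate = float(np.mean(model_predict(X_noisy) != y_pred))

# 4. Repository Provenance: who committed the code, and when (needs a git repo).
try:
    last_commit = subprocess.run(
        ["git", "log", "-1", "--format=%h %an %ad"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
except Exception:
    last_commit = "not inside a git repository"

print(f"fairness gap       : {fairness_gap:.3f}")
print(f"feature importances: {np.round(importances, 3)}")
print(f"robustness flips   : {flip_rate:.3f}")
print(f"provenance         : {last_commit}")
```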
Elon Musk dreams of humans being freed from work because robots and AI will give everything away for free.
But at night he wakes up sweating, afraid that the same AI will wipe us all out.
So which is it: is AI heaven or the grave?
He created both himself, and he still can't decide.
The question is:
If robots make everything cheaper but humans have no income, who will buy the goods?
Capitalism without consumers is like a Ferrari without fuel: fast, beautiful, but dead on the spot.
Welcome to the real bubble: not AI, but our outdated economic system.
Hinton: "AI isn't a bubble, but society will collapse if there's no redistribution of wealth."
Efficiency increases 10x, the workforce is cut by 70%, & the consumer market collapses because people have no money.
What's left is a handful of giant corporations & increasingly wealthy billionaires.
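A deliberately crude back-of-the-envelope in Python, just to show the arithmetic behind that fear. The numbers (10x productivity, 70% layoffs, the redistribution shares) are illustrative placeholders, not a forecast and not Hinton's model.

```python
# Toy aggregate-demand arithmetic: what 10x efficiency plus 70% layoffs does
# to purchasing power, with and without redistribution of the windfall.
workers_before = 100
wage = 1.0                        # income per worker, arbitrary units
productivity_gain = 10            # output per worker rises 10x
layoff_rate = 0.70                # 70% of jobs cut

output_before = workers_before * 1                   # one unit per worker before
workers_after = int(workers_before * (1 - layoff_rate))
output_after = workers_after * productivity_gain     # supply explodes to 300
wage_income_after = workers_after * wage             # household demand falls to 30
windfall = output_after - output_before              # gains captured by firms

for transfer_share in (0.0, 0.25, 0.5):
    # hand a share of the windfall back to households (a UBI-style transfer)
    demand = wage_income_after + transfer_share * windfall
    print(f"redistribution {transfer_share:.0%}: supply={output_after}, demand={demand:.0f}")
# With 0% redistribution: supply 300 vs. demand 30, the Ferrari without fuel.
```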
Our cars now have brains 50,000x more powerful than the computers that landed humans on the Moon... but they still can't understand that a 10-minute turn signal means "sorry, wrong lane."
If we trust AI smart enough to brake in the rain, why are we afraid it will take over the world?
Or are we afraid because we realize that what keeps AI stupid in the little things is... us?
Which is more dangerous: AI that's too smart, or humans who are too comfortable playing dumb?
NISQ is no longer "noisy junk". It turns out it's noisy enough to give birth to phases of matter that don't exist in classical physics: time crystals that tick forever and many-body localization that resists heat death.
The quantum simulator is born.
The question is: what strange phase will we encounter next, and what will make classical physics cry?
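For the physics-curious, here is a toy exact-diagonalization sketch in Python (6 spins, parameters I picked myself) of the period-doubling signature associated with discrete time crystals: an imperfect global spin flip alternating with a disordered Ising interaction. Real NISQ experiments are far more involved; this only shows what "responding at twice the drive period" looks like.

```python
# Toy Floquet "time crystal" signature: imperfect pi-pulses + disordered Ising chain.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_list(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

N = 6                      # 6 spins -> 64-dimensional Hilbert space
rng = np.random.default_rng(0)

# Disordered Ising Hamiltonian acting between the pulses.
H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N - 1):
    ops = [I2] * N; ops[i] = Z; ops[i + 1] = Z
    H += 0.15 * kron_list(ops)                     # nearest-neighbour ZZ coupling
for i in range(N):
    ops = [I2] * N; ops[i] = Z
    H += rng.uniform(-0.5, 0.5) * kron_list(ops)   # random on-site fields

# Imperfect global flip: rotate every spin by (pi - eps) around X.
eps = 0.1
theta = np.pi - eps
Rx = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
U_F = kron_list([Rx] * N) @ expm(-1j * H * 1.0)    # one Floquet period

# Start with all spins up and watch the magnetization of spin 0.
psi = np.zeros(2**N, dtype=complex); psi[0] = 1.0
ops = [I2] * N; ops[0] = Z
Z0 = kron_list(ops)

for t in range(1, 21):
    psi = U_F @ psi
    print(f"period {t:2d}: <Z_0> = {(psi.conj() @ Z0 @ psi).real:+.3f}")
# The sign of <Z_0> flips every period despite the imperfect pulse (eps != 0),
# i.e. the response is locked at twice the drive period. A genuine MBL-stabilized
# time crystal needs far longer runs, disorder averaging, and scaling analysis.
```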
09.12.2025 12:57
DeepSeek-R1 just proved that world-class reasoning can be born from pure Reinforcement Learning (RL): its R1-Zero variant was trained without supervised fine-tuning (SFT) or OpenAI-style synthetic data.
Open-source, in 1.5B-70B sizes, and it beats o1 & Claude 3.5 on MATH and code.
The big question: if pure RL is enough to make models "think," what's the point of those trillions of SFT tokens we've been chasing all this time? 2026 will be brutal.
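My simplified reading of that recipe, as a toy numpy sketch: sample a group of completions per prompt, score them with a rule-based, verifiable reward, and normalize each reward against its own group (the group-relative advantage idea behind GRPO, which the R1 report describes). The reward function, the "####" answer format, and the example strings are placeholders I made up.

```python
# Toy sketch of verifiable rewards + group-relative advantages (GRPO-style).
import numpy as np

def verifiable_reward(completion: str, reference_answer: str) -> float:
    """Rule-based reward: 1.0 if the final answer after '####' matches, else 0.0.
    (A stand-in for the accuracy/format rewards used in R1-Zero-style training.)"""
    predicted = completion.split("####")[-1].strip()
    return 1.0 if predicted == reference_answer else 0.0

def group_relative_advantages(rewards: np.ndarray) -> np.ndarray:
    """Normalize each completion's reward against its own group's mean and std,
    so no learned value function (critic) is needed."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# One prompt, a group of sampled completions (placeholders for model outputs).
reference = "42"
group = [
    "step by step reasoning ... #### 42",
    "step by step reasoning ... #### 41",
    "shorter reasoning ... #### 42",
    "confused reasoning ... #### 7",
]
rewards = np.array([verifiable_reward(c, reference) for c in group])
advantages = group_relative_advantages(rewards)

print("rewards:   ", rewards)       # [1. 0. 1. 0.]
print("advantages:", advantages)    # positive for correct completions, negative otherwise
# In full training these advantages weight a clipped, PPO-style policy-gradient
# loss over the completion tokens; no SFT data and no learned reward model required.
```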