So deeply sad to report the passing of search marketing industry vet Alan Bleiweiss. He was so caring and smart, and he made so many of us smile. www.seroundtable.com/alan-bleiwei...
04.09.2025 00:53

@richtatum.bsky.social — 04.12.2024 14:48
LLMs must be trained. Trained on the words we wrote—with all our thoughts, ideas, beliefs, biases, truths, and fictions. As a result, LLMs are inherently constrained to the worldviews already present in the training data.

1️⃣ First: The models are trained on all the worldviews! (Not really... but sorta.)

LLMs are remarkable tools, but they can only “think” and “reason” within the paradigms present in their training data. And it’s a race to the average.
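To make that “race to the average” concrete, here’s a toy illustration in Python. It is nothing like a real LLM, just a bigram table decoded greedily; the mini-corpus and the `generate` helper are invented for illustration:

```python
from collections import Counter, defaultdict

# A four-sentence toy corpus. The speakers say "helpful", "careful", and
# "bold"; "helpful" appears twice, so it is the statistical average.
corpus = (
    "the model is helpful . the model is careful . "
    "the model is bold . the model is helpful ."
).split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 4) -> str:
    """Greedily emit the most common continuation at every step."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the model is helpful ."
# The majority phrasing wins every time; the rarer "careful" and "bold"
# continuations never surface. That is the race to the average.
```

Because decoding always favors the most frequent continuation, minority phrasings in the corpus simply never appear in the output.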
Here’s the reality: these training datasets necessarily include ideas and language that would be deeply troubling to any given user, regardless of their individual faith or morals.

There are also ideas present that would make the saintly cleric, the neighborhood Wiccan, the ascetic monk, and the avowed atheist cheer.

While this chaotic breadth is essential for LLMs to work, the actual contents of the training corpus are completely opaque to users and beyond our influence. The worldviews are already there, but we can’t know anything about them in advance or influence which ones are present.

(To be fair, some organizations are taking steps toward transparency by publishing model cards that outline aspects of the training data and limitations. That’s a step in the right direction.)
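As a minimal sketch of what reading that partial “ingredients list” can look like, assuming the `huggingface_hub` Python package, network access, and a public model that actually publishes a card (“gpt2” is just a familiar example):

```python
# Requires: pip install huggingface_hub (and network access).
from huggingface_hub import ModelCard

card = ModelCard.load("gpt2")  # any public repo with a card works

# Structured metadata from the card's YAML header: license, tags,
# sometimes datasets. A partial "ingredients list" at best.
print(card.data.to_dict())

# The free-text body often describes training data, intended uses,
# and known limitations.
print(card.text[:500])
```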
2️⃣ Second: The outputs are generally constrained by ethical, moral, and legal frameworks—but whose?

Every AI response is shaped by built-in **guardrails**—the rules and algorithms influencing, modifying, or filtering every output. Unless the model replies with an apologetic refusal to answer, these guardrails are usually invisible and unknowable to us.

Think about it: every rule or law embeds an ethical or moral view. Rules reflect worldviews. Thus, the guardrails attempting to constrain AI and LLMs reflect the values or ethics of their builders.
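To show how literally a guardrail encodes someone’s values, here’s a deliberately crude sketch. The blocked-topic list, the refusal wording, and the `guarded_reply` helper are all hypothetical inventions, not any vendor’s real policy:

```python
# Both the topic list and the refusal wording are someone's value
# judgments written down as code.
BLOCKED_TOPICS = {"weapons", "gambling"}   # whose list? whose ethics?
REFUSAL = "I'm sorry, but I can't help with that."

def guarded_reply(model_output: str) -> str:
    """Pass raw model output through a builder-defined filter.

    The end user only ever sees the refusal; the rule that triggered
    it stays invisible, just like the guardrails described above.
    """
    lowered = model_output.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return model_output

print(guarded_reply("Here's a recipe for lentil soup."))  # passes through
print(guarded_reply("Here's how to win at gambling."))    # silently replaced
```

Even this ten-line toy makes a worldview-laden call: one tradition might object to gambling content, while another wouldn’t blink.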
Individual users like you or me might disagree with some of these rules and permissions—if we could know them. But they, too, are opaque and unknowable to us.

I recognize that proprietary intellectual property has commercial value. Companies guard their secret recipes—Coke doesn’t reveal its formula, after all.

But consumers have a right to know what they’re ingesting. It should be the same for intellectual consumption. There needs to be a useful balance between protecting IP and ensuring transparency.

3️⃣ Third: We can’t escape our own biases and worldviews being involved.

Unfortunately, the only element we can truly know about our AI interactions is what biases, beliefs, and assumptions we ourselves bring to the conversation. And even there, most of us remain largely unaware of our own hidden brains.

It’s important to be aware of our cognitive biases and be intentional about the worldview we align with. But, admittedly, this is very difficult to do.

This dynamic plays out at every level of AI systems: our fundamental beliefs about reality are embedded in the training data (whether we contributed to it or not), encoded into the guardrails (whether we agree with them or not), and present in our every interaction as end users.

These worldviews—which may conflict, overlap, or remain hidden—are inescapably woven into the fabric of AI, whether we’re approaching these tools as atheists, agnostics, or adherents of any faith tradition.

→ So, what does faith have to offer when considering the ethical dimensions of AI?

Faith traditions provide rich ethical principles—compassion, justice, respect for human dignity—that can guide AI development. By intentionally integrating these values, we can create AI systems that not only reflect diverse worldviews but also aspire to our highest shared ideals.

From the curation of training data to the design of alignment systems to our daily interactions, our fundamental beliefs about reality and ethics are inevitably present, whether we acknowledge them or not.

This becomes even more critical given the current lack of transparency in AI development. We have no “ingredients list” for these models—no clear view into what worldviews, biases, or ethical frameworks have shaped their training data or alignment systems.

Without this transparency, how can we fully understand or responsibly engage with these increasingly influential tools?

I suspect that future development efforts will focus on creating bespoke, niche generative models aligned with local community standards and with various worldviews and faith groups.

This could help users engage with AI that resonates more closely with their values while still benefiting from the technology. By honoring various faith perspectives, we can ensure that AI serves as a tool for inclusion rather than division.
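If those bespoke, community-aligned models do emerge, the simplest version might look like this sketch: one base model steered by a community-specific values preamble. The community names, the value statements, and the `chat` client at the end are all hypothetical placeholders:

```python
# Everything named here (communities, value statements, the `chat`
# client) is a placeholder for illustration only.
COMMUNITY_VALUES = {
    "benedictine": "Answer with humility, hospitality, and care for the vulnerable.",
    "secular-humanist": "Ground answers in evidence, reason, and human dignity.",
}

def community_prompt(community: str, question: str) -> list[dict]:
    """Build a chat request whose system message encodes local values."""
    return [
        {"role": "system", "content": COMMUNITY_VALUES[community]},
        {"role": "user", "content": question},
    ]

messages = community_prompt("benedictine", "How should I treat a stranger?")
print(messages)
# reply = chat(messages)  # `chat` stands in for any LLM API call
```

Same base model, different preamble, different worldview shaping every answer. That is the whole point in miniature.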
This reality demands a new level of honesty in commercial AI discourse.

Beyond asking what role faith and ethics should play in AI development (an important question, to be sure), we need to acknowledge that foundational assumptions about reality, meaning, and ethics—whether derived from scientific materialism, religious traditions, or philosophical frameworks—are already deeply woven into these systems.

The work ahead isn’t just about recognizing these embedded worldviews, but about actively incorporating ethical principles from faith traditions to guide the development of these powerful tools.

This transparency and intentionality would serve everyone, regardless of worldview, by enabling informed choices about the AI systems we create and use, and ensuring they align with our shared values and ethical standards.

<end transmission>