Rich Tatum »∵«

@richtatum.bsky.social

Technical SEO, AI/LLM automator, prompt whisperer, editor, media guy, photographer, factotum. Noticer of overlooked details. I ♡ story, dataviz, analytics, writing, editing, podcasting. → Available to hire!

902 Followers 482 Following 70 Posts Joined Oct 2023
6 months ago
> The Industry Mourns The Loss Of Alan Bleiweiss - Caring & Giving Search Marketer
> I am deeply sad to report that Alan Bleiweiss has passed away on August 22nd. Alan was a true friend to the search marketing industry; he always had a witty and funny response to make everyone smile,...

So deeply sad to report the passing of search marketing industry vet Alan Bleiweiss. He was so caring and smart, and he made so many of us smile. www.seroundtable.com/alan-bleiwei...

1 year ago

This transparency and intentionality would serve everyone, regardless of worldview, by enabling informed choices about the AI systems we create and use and by ensuring that they align with our shared values and ethical standards.

<end transmission>

The work ahead isn't just about recognizing these embedded worldviews, but about actively incorporating ethical principles from faith traditions to guide the development of these powerful tools.

—whether derived from scientific materialism, religious traditions, or philosophical frameworks—are already deeply woven into these systems.

Beyond asking what role faith and ethics should play in AI development (an important question, to be sure), we need to acknowledge that foundational assumptions about reality, meaning, and ethics—

This reality demands a new level of honesty in commercial AI discourse.

This could help users engage with AI that resonates more closely with their values while still benefiting from the technology. By honoring various faith perspectives, we can ensure that AI serves as a tool for inclusion rather than division.

I suspect that future development efforts will focus on creating bespoke, niche generative models aligned with local community standards and various worldviews and faith groups.

Without this transparency, how can we fully understand or responsibly engage with these increasingly influential tools?

This becomes even more critical given the current lack of transparency in AI development. We have no “ingredients list” for these models—no clear view into what worldviews, biases, or ethical frameworks have shaped their training data or alignment systems.

From the curation of training data to the design of alignment systems to our daily interactions—our fundamental beliefs about reality and ethics are inevitably present, whether we acknowledge them or not.

Faith traditions provide rich ethical principles—compassion, justice, respect for human dignity—that can guide AI development. By intentionally integrating these values, we can create AI systems that not only reflect diverse worldviews but also aspire to our highest shared ideals.

→ So, what does faith have to offer when considering the ethical dimensions of AI?

These worldviews—which may conflict, overlap, or remain hidden—are inescapably woven into the fabric of AI, whether we’re approaching these tools as atheists, agnostics, or adherents of any faith tradition.

This dynamic plays out at every level of AI systems: our fundamental beliefs about reality are embedded in the training data (whether we contributed to it or not), encoded into the guardrails (whether we agree with them or not), and present in our every interaction as end users.

It’s important to be aware of our cognitive biases and be intentional about the worldview we align with. But, admittedly, this is very difficult to do.

Unfortunately, the only element we can truly know about our AI interactions is what biases, beliefs, and assumptions we ourselves bring to the conversation. And even there, most of us remain largely unaware of our own hidden brains.

3️⃣ Third: We can’t escape our own biases and worldviews being involved.

But consumers have a right to know what they’re ingesting. It should be the same for intellectual consumption. There needs to be a useful balance between protecting IP and ensuring transparency.

I recognize that proprietary intellectual property has commercial value. Companies guard their secret recipes—Coke doesn’t reveal its formula, after all.

Individual users like you or me might disagree with some of these rules and permissions—if we could know them. But they, too, are opaque and unknowable to us.

Think about it: every rule or law embeds an ethical or moral view. Rules reflect worldviews. Thus, the guardrails attempting to constrain AI and LLMs reflect the values or ethics of their builders.

Every AI response is shaped by built-in **guardrails**—the rules and algorithms influencing, modifying, or filtering every output. Unless the model replies with an apologetic refusal to answer, these guardrails are usually invisible and unknowable to us.

2️⃣ Second: The outputs are generally constrained by ethical, moral, and legal frameworks—but whose?

(To be fair, some organizations are taking steps toward transparency by publishing model cards that outline aspects of the training data and limitations. That’s a step in the right direction.)

While this chaotic breadth is essential for LLMs to work, the actual contents of the training corpus are completely opaque to users and beyond our influence. The worldviews are already there, but we can’t know anything about them in advance or influence which ones are present.

There are also ideas present that would make the saintly cleric, the neighborhood Wiccan, the ascetic monk, and the avowed atheist cheer.

Here’s the reality: these training datasets necessarily include ideas and language that would be deeply troubling to any given user, regardless of their individual faith or morals.

LLMs are remarkable tools, but they can only “think” and “reason” within the paradigms present in their training data. And it’s a race to the average.

1️⃣ First: The models are trained on all the worldviews! (Not really... but sorta.)

LLMs must be trained. Trained on the words we wrote—with all our thoughts, ideas, beliefs, biases, truths, and fictions. As a result, LLMs are inherently constrained to the worldviews already present in the training data.
