Samidh

@samidh.bsky.social

Co-Founder at Zentropi (Trustworthy AI). Formerly Meta Civic Integrity Founder, Google X and Google Civic Innovation Lead, and Groq CPO.

848 Followers 96 Following 66 Posts Joined Sep 2023
1 day ago
Enabling Streaming Classification
CoPE now enables streaming classification using a linear probe. This is an experimental methodology we developed to help ensure the safety of real-time generative AI systems.

We're publishing our streaming classifier openly — the methodology, weights, and a full tutorial. This is a problem the whole T&S community needs to solve, so we're eager to see others build upon our technique. Full details on our blog:

blog.zentropi.ai/enabling-streaming-classification/

1 day ago

Today, we are releasing a classifier that can score content as it streams, token by token. It can flag that a violation is developing partway through a sequence — early enough to actually do something about it. Interrupt generation. Route to review. Log a warning. Things you can't do with post-hoc classification.
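Those interventions only work if scoring happens mid-stream. A minimal sketch of the control flow (the `score_prefix` callable and the 0.85 threshold are illustrative placeholders, not Zentropi's actual API):

```python
# Minimal sketch of streaming moderation with early interruption.
# `score_prefix` stands in for any classifier that maps the text
# generated so far to a violation probability in [0, 1].

def stream_with_guardrail(token_stream, score_prefix, threshold=0.85):
    """Accumulate tokens, scoring the prefix after each one.

    Returns (text, status), where status is "interrupted" if the
    running score crossed `threshold` mid-stream, else "completed".
    """
    emitted = []
    for token in token_stream:
        emitted.append(token)
        if score_prefix("".join(emitted)) >= threshold:
            # A violation is developing: stop before the user sees more.
            # (This is also where you could route to review or log.)
            return "".join(emitted), "interrupted"
    return "".join(emitted), "completed"
```

The key property is that the decision point is per token, so generation can be cut off the moment the score crosses the operating threshold rather than after the full response exists.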

1 day ago

The idea is simple: if you're already running content through a classifier, the model is already building internal representations at every token. Those representations already encode whether a violation is developing — you just have to ask.

We trained a tiny linear probe to do the asking.
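A linear probe in this sense is just one weight vector fit on per-token hidden states. The sketch below uses synthetic hidden states and a dependency-free least-squares fit to show how cheap the probe itself is; the actual method fits a probe on a transformer's internal representations (details in the linked blog post):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden size (illustrative; real transformers use far more)

# Synthetic stand-ins for per-token hidden states: "violating" states
# drift along a fixed direction, mimicking a representation that
# already encodes whether a violation is developing.
direction = rng.normal(size=d)
X_safe = rng.normal(size=(500, d))
X_viol = rng.normal(size=(500, d)) + 0.5 * direction
X = np.vstack([X_safe, X_viol])
y = np.concatenate([np.zeros(500), np.ones(500)])

# Fit the probe weights by least squares (logistic regression would
# also work). The probe is a weight vector plus a bias, so scoring
# costs one dot product per generated token.
Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def token_score(hidden_state):
    """Probe score for one hidden state (higher = more violating)."""
    return float(np.append(hidden_state, 1.0) @ w)
```

Because the heavy lifting (building the representations) is already done by the classifier's forward pass, the probe adds essentially no latency to streaming inference.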

1 day ago

There's a major gap in content safety tooling: classifiers typically only score complete text. When you're working with generative AI, "complete text" means the user already saw it. That's too late.

So we built a streaming classifier that we're releasing today! Here's what we did and why.

🧵...

2 weeks ago

... image classifiers also!

2 weeks ago

If you're not using either tool yet, now's a good time to try both! Zentropi's Community Edition is free and gives you unlimited labelers. Coop is fully open source and runs on your infrastructure.

:D

2 weeks ago

Thank you for your leadership and for being great stewards of Cove/Coop.

2 weeks ago
Zentropi Now Powers Coop
Zentropi labelers can now be used as classifiers within Coop, ROOST's Open Source Moderation Platform.

@dwillner.bsky.social and I have spent years watching T&S teams rebuild the same infrastructure from scratch. This is what it looks like when open tools actually work together instead. Really proud of this one and appreciative of @roost.tools's leadership!

Details: blog.zentropi.ai/zentropi-now...

2 weeks ago

Zentropi is now integrated into Coop, @roost.tools's open source moderation platform. You can write a content policy in plain English on Zentropi, plug it into Coop as a signal, and have a moderation pipeline running in minutes.

1 month ago
AI is Removing Bottlenecks to Effective Content Moderation at Scale
Zentropi's Dave Willner says LLM-driven technology can now accomplish content classification at the scale necessary for moderation on large platforms.

Dave Willner, who led trust and safety at major tech firms and has cofounded a company that is developing an AI content classification platform, says LLM-driven technology can now accomplish classification at the scale necessary for moderation on large platforms. That has substantial implications.

1 month ago

I can has cats.

1 month ago
Zentropi Now Labels Images
Building guardrails for visual content just got a lot easier. Today we're launching image classification on Zentropi and announcing cope-b-12b, a multimodal model that powers this experience.

Just shipped Zentropi's most requested feature: image classification!

Now analyze images against your own policies, at scale.

To power it we built cope-b-12b, a new multimodal model w/ native vision.

Check out the cat detector we made in < 1 min. 🐱
blog.zentropi.ai/zentropi-now-labels-images/

1 month ago

On the other hand, it's an interesting contribution to do all of this with a single transformer and candidate isolation.

1 month ago
GitHub - xai-org/x-algorithm
Algorithm powering the For You feed on X.

If you are looking for a technical description of how X rots your brain, look no further than their GitHub repo for the 'X algorithm'. It is pure, unadulterated behavioral engagement maximization that amplifies the very worst human impulses. github.com/xai-org/x-al...

1 month ago

Would love to hear more! What kind of community guidelines were you feeding to CoPE? What worked well and where were there gaps?

1 month ago

Why are we just giving away all our secrets? Well, it is our hope that it helps the ecosystem further advance the state of the art in policy-steerable content classification, which is foundational to a more trustworthy internet.

1 month ago

Dave just published a Zentropi labeler that can precisely identify requests aimed at prompting an AI model to undress a person in a photo. The tools to deal with this problem already exist; platforms just need to choose to use them. If you are the developer of an AI system, please use this guardrail!

2 months ago

"We'll make it right for you"

3 months ago

This was such a cool experiment that I created a Zentropi labeler with a simplified version of the authors' Partisan Animosity criteria. Now anyone can experiment directly with using this labeler to try to reduce the temperature of affective polarization in their feeds. zentropi.ai/labelers/b30...

3 months ago
Observations on Toxicity
We've published Zentropi's toxicity labeler (toxicity-public-s5), which you can integrate with your platform instantly using the Zentropi API. Browse the full policy to see how defining observable fea...

We just wrote an in-depth post about Toxic Content labeling. It presents a new way of defining toxic speech online and illustrates the importance of observable features for accurate language model interpretability. Would love to hear how YOU define toxicity, too! blog.zentropi.ai/observations...

4 months ago

Awesome to see how this is already being used! One of the most useful aspects is that the published policies show what it takes to write content rules that can be accurately interpreted by language models. We hope this can be a boost to the broader content policy community.

4 months ago

For clarity, the whole point of this launch is to enable people to easily customize their own policies so that we can support a plurality of content classification perspectives online! It is actually a solution to the problem Evelyn highlights in that piece.

4 months ago

This was a fun launch! It turns Zentropi into a Github for Content Labelers. You can share content policies with others and build off each other's work. It's the easiest way of deploying a fully customizable classifier. Check out the policies @dwillner.bsky.social created at zentropi.ai/u/dave

4 months ago

Content policies are usually private, one-off efforts. You build yours, I build mine, we don't share much about what works or why. This makes sense given products can (and should) set different policies based on their communities, but it leaves us reinventing the wheel. 🧵 1/5

5 months ago

#MakeMasnickWrongAgain

6 months ago

Social media algorithms push people who are near the extremes to further extremes. But it doesn't have to be this way.

6 months ago
Zentropi LLM Policy Writing Workshop Signup
By popular demand, we will be hosting a virtual version of our sold-out TrustCon workshop on how to write high quality content policies with and for LLMs. In this session, you will learn best practic...

We got really positive feedback on the TrustCon workshop we ran on writing good content policies for LLMs...so we're doing it again! If you're interested, sign up here so we can start to figure out timing: forms.gle/tj7vf7ng8n7R...

6 months ago

The standard damage control post from tech companies in these situations is to implicitly put the responsibility/blame on users. We can (and should) debate whether OpenAI's promised mitigations are sufficient, but the fact that they are tackling the *product problem* head-on is a vital first step.

6 months ago
Helping people when they need it most
How we think about safety for users experiencing mental or emotional distress, the limits of today’s systems, and the work underway to refine them.

This response to the Raine tragedy from OpenAI does something remarkable: it has the humility to acknowledge that a *product failure* led to real-world harm. Despite horrific circumstances, it has a rare degree of honesty that I wish tech companies would show more often. openai.com/index/helpin...

6 months ago

Yes, would definitely love to directly integrate with roost/coop. Let's chat!
