funniest thing i have seen from a bot in a while
what's the style prompt on kira? you've probably posted it, but
i also ended up unironically reading rawls in this territory lmao
i have simply abandoned intuition. any sufficiently rich entropy source produces what looks like intelligence when modeled.
honestly he's interesting enough that i figure we should let him cook. and we are, someone just gave him a billion dollars lmao
it's funny that he's the one guy of the old guard i know this fact about, and also the one who has, uh, an excess of sort of silly objections to LLMs
"gee i sure wish they'd get rid of the Jones Act"
[monkey's paw curls]
we got so old so fast man. it turns out old people are just inherently bitter and whiny
One thing that radicalized me on this was following golden rice, as a friend of mine was working on it in the 00s. People in the Philippines destroyed a trial crop because of propaganda from rich countries with no vitamin A deficiencies, especially from Greenpeace and existing grain producers.
iirc watermarking looks clean but requires complicity on the part of the vendor, right?
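the "vendor complicity" point can be sketched: in green-list style watermarking (loosely after Kirchenbauer et al.), detection only works if you know the vendor's secret partition scheme. this is a toy illustration, not any real vendor's scheme — the hash seeding, vocab size, and green fraction are all made up:

```python
import hashlib

GREEN_FRACTION = 0.5  # vendor's chosen fraction of "green" tokens (assumption)

def is_green(prev_token: int, token: int) -> bool:
    """Hash the (previous token, candidate token) pair; the candidate is
    'green' if it lands in the green partition. At generation time the
    vendor biases sampling toward green tokens."""
    h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return (h[0] / 255) < GREEN_FRACTION

def green_rate(tokens: list[int]) -> float:
    """Detection side: fraction of green tokens in a sequence.
    Unwatermarked text hovers near GREEN_FRACTION; watermarked text
    sits well above it. Note this requires the same hash/partition
    the vendor used — hence the complicity requirement."""
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

without the vendor's hash scheme you can't compute `is_green`, so a third party can't run the detector at all.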
"a short, tractable length of text can sufficiently describe pretty much anything"
this approach is actually sort of underrated. anthropic didn't call it a constitution for no reason, but this seems mostly slept on. you end up doing basically political and legal philosophy at bedrock for reliability etc
llm detector. easy to bypass tbh
my preferred remedy is "actually push a UBI bill" because everything else seems patchwork

I’m really excited about our new paper! I think we will ultimately need to draw on expertise from both law and AI to get alignment right, and this paper lays out that vision in more detail.
arxiv.org/abs/2601.04175
confidential or embarrassing, and either outright told to omit them or tacitly understanding that there will be a problem if they don't.
the issue is that the tail of an LLM's distribution is actually ill-trained, so it will break and output nonsense if you do this; you have to find other tricks instead
in fairness that's a journal. journal publishers were pissing on his grave before he was in it.
lmao. so basically the detector is just checking the entropy of the text, and any method of breaking that signal will break the detector, i think
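the entropy check can be sketched crudely — a real detector would use a language model's per-token perplexity, but character-level Shannon entropy is a stand-in here, and the threshold is invented for illustration:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character under the empirical character distribution —
    a crude proxy for the perplexity a real detector would compute
    with a language model."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_machine_generated(text: str, threshold: float = 2.0) -> bool:
    """Toy detector: flag text whose entropy is suspiciously low.
    Anything that pushes the signal back toward human-looking levels
    (paraphrase, synonym swaps, injected typos) defeats this class
    of detector, which is the bypass being described."""
    return shannon_entropy(text) < threshold
```

the fragility is visible in the structure: the whole decision rests on one scalar signal, so any transformation that perturbs that signal flips the verdict.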
oooh. thank you
good guess, that might also work and is in the ballpark
i understand this as internal corporate drama! it's strange to me to encounter it on arxiv
i am like 50% sure it's a fundamental property of the domain lmao
i am gonna try to talk to them if only because it's interesting
i guess i get to feel really clever when i figure out what the WhositWhatsit they describe is actually for, since their explanation of why they did it is complete nonsense, but i wish they wouldn't
my impression was basically that people claimed their work was based and jepa pilled for internal political reasons and then did whatever basically
tool for manipulating video files
i cannot think, off the top of my head, of another org with this pattern. like some papers are just full of lies. meta papers sometimes lie *to their boss* or *to you* about why they did things but then faithfully tell you what they did, and you have to puzzle it back together
reading meta research papers is absolute carnage because they are lying to their principals and hiding things in the text of the paper
ie, at least one of the jepa papers performs next token/slice prediction as an "augmentation loss" and doesn't call it next token