27.02.2026 18:04
Remarkably brief.
27.02.2026 18:04
There's possibly the USVI tax thing as well but I don't know if he used that
17.02.2026 23:00
One weird thing about social media is that on xᵗwiᵗᵗer, accounts still begin a dialogue with something that they could have easily asked an LLM
Here, let me Gemini that for you
Most interactions are mogged by a good use of an LLM; fine, I'd rather have a higher bar for interactions
But he said "quantitative trading firm" and that has some degree of specificity. Though precision in language among other tech opinion havers is saying "alpha" when they mean "edge"
Also no hate for this guy at all. But it's my opinion on an opinion
Unless he means fake returns, which is probably one thing an AGI would do, or illegal returns, which is a much more achievable and reasonable thing for an AGI to do
17.02.2026 15:51
Everything I think I know about a topic I think I know a little bit about says this is very wrong. And it is very weird to me considering who it is
17.02.2026 15:40
Claude, is this true? Think hard
05.02.2026 20:15
Got that Opus 4.6 showing up, but it comes with a warning label so I'm afraid to use it
05.02.2026 18:02
Is $X a historically good ticker?
05.02.2026 01:46
McAfee was much more interesting than Saylor. Just the worst people around now
05.02.2026 01:37
Deception
04.02.2026 21:59
Violation
04.02.2026 21:54
Treachery
04.02.2026 21:34
Betrayal
04.02.2026 21:34
Freaking Anthropic, do they know customer retention is a thing
04.02.2026 21:16
People shouldn't gamble. But there are a lot of shouldn'ts and I'm not going to speculate on which are the most problematic
04.02.2026 17:02
That mNAV metric is kinda dumb and annoying. Wish I had never read about it
04.02.2026 16:11
Presumably rational people are using prediction markets for risk management, so the losses are offset by gains elsewhere. Other people are just simple salt-of-the-earth utility maximization enjoyers.
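The "losses offset by gains elsewhere" idea can be made concrete with a toy hedge. This is a hypothetical sketch with made-up numbers (the farmer, the flood, the 0.20 price are all invented for illustration, not anything from the post): someone exposed to an event buys YES shares on a prediction market so their total outcome no longer depends on whether the event happens.

```python
# Hypothetical sketch: hedging a real-world exposure with a prediction market.
# All names and numbers here are illustrative assumptions.

def hedged_pnl(event_happens: bool, exposure: float, shares: int, price: float) -> float:
    """P&L of someone who loses `exposure` if the event happens, but bought
    `shares` YES contracts at `price` (each contract pays 1.0 if YES)."""
    # Market leg: each YES share gains (1 - price) if the event happens,
    # and loses the premium `price` if it doesn't.
    payout = shares * (1.0 - price) if event_happens else -shares * price
    # Real-world leg: the underlying exposure only bites if the event happens.
    loss = -exposure if event_happens else 0.0
    return loss + payout

# A farmer who loses 1000 if it floods buys 1000 YES-flood shares at 0.20:
print(hedged_pnl(True, 1000.0, 1000, 0.20))   # flood:    -1000 + 800 = -200
print(hedged_pnl(False, 1000.0, 1000, 0.20))  # no flood:     0 - 200 = -200
```

Either way the net is -200: the market gain offsets the flood loss, and in the no-flood branch the premium paid is the fixed cost of the hedge, which is the "losses offset by gains elsewhere" picture.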
04.02.2026 14:52
The argument, motivated by a jaundiced eye, is that some people are so powerful, so evil (the puppeteers and masterminds) that they purposely avoided being in the file. Absence is suspicious because, according to these nutters, the big evil team should all be friends. Typical conspiracy slop.
"What have you done with my monkeys‽"
03.02.2026 00:03
I could communicate with inline stickers to ensure humanity, for example the message in the photo. It turns out an LLM can read and paraphrase this perfectly. I wonder how difficult it would be to replicate.
01.02.2026 16:20
I could imagine people trying to use neologisms or newly minted phrases, post knowledge cutoff, to signal humanity. Maybe then language evolves (well, changes) quickly, and that could go either way but I'm not optimistic.
01.02.2026 16:09
Style is maybe more subjective. All caps: gross. Using '=/=' is also visually disturbing and a poor choice when '!=' or '≠' are available
01.02.2026 15:41
A problem with anti-AI sentiment, and also a problem with AI model optimism, is human slop.
I feel like it is more frequent now in human writing.
For example, the phrase "many such cases" is, for me, grating human slop. This is just one example; there are many others.
Maybe some types of benchmaxxing are better than others, say code, because, possibly, good code taste and development are better aligned. Though coding primates on social media seem to be a heterogeneous bunch
01.02.2026 02:17
The thing that could happen is that there aren't many labs, and there are enough people with poor taste that the less fit models win; we get the VHS model. In my imagination the bad outcome happens because the people best suited to curate and provide feedback are not hired by the labs or given agency
01.02.2026 02:12
One thing that I would like to believe (but don't know if it's true) is that good future AI models (and the benchmark is relevant) will benefit from human labor: the good taste of the humans developing them.
I guess that could be in model development, scaffolding, or evaluation (which includes the benchmark).
Oh, and another DeepSeek model in a couple of weeks. But nothing ever happens
31.01.2026 21:45
I see we are doing $BTCQ again
31.01.2026 21:33
I just saw a thread where everyone mostly agreed that not being in the files was even worse and totally suspicious, though somewhat contingent on who.
31.01.2026 20:39