He said "Amen" straight into the drop
Lmao Pope Leo threw a rave for an archbishop's 75th birthday this is kind of incredible
@davidcrespo.bsky.social
web dev + hot dad. enjoy charts, unions, computer games, philosophy. chicago crespo.business
He said "Amen" straight into the drop
Lmao Pope Leo threw a rave for an archbishop's 75th birthday this is kind of incredible
incredible and incredibly niche concept. and I am in the niche
22.11.2025 02:08 · 👍 3 🔁 0 💬 0 📌 0
Stephen Miller: "I told you to take the mayor's staff!"
21.11.2025 21:16 · 👍 300 🔁 37 💬 6 📌 2
TRUMP [after spending 5 minutes with Zohran]: surplus value, it's a very wonderful thing, very wonderful, and they're stealing it. Can you believe that?
We're going to be looking very strongly at the bourgeoisie, what they're up to
It appears that Donald Trump did in fact receive the light of Islam.
21.11.2025 20:58 · 👍 502 🔁 54 💬 7 📌 3
funny detail that the first and last line are perfectly clear. it budgeted extra tokens for those
21.11.2025 21:28 · 👍 0 🔁 0 💬 1 📌 0
"generate an image crammed with small text. think really hard and fill the entire image with a 2000 word short story about a dog solving a mystery"
551 tokens again. not at all surprising that for a fixed number of output tokens, the more text there is, the less coherent it is
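A rough sanity check on that intuition (the ~1.3 tokens-per-word figure is a typical English tokenizer average, not something from the post):

```python
# Back-of-the-envelope: why 551 output tokens can't comfortably carry
# a 2000-word story. Assumes ~1.3 text tokens per English word, a
# common rule of thumb rather than a measured figure for this model.
words_requested = 2000
image_tokens = 551

tokens_as_plain_text = words_requested * 1.3               # ~2600 tokens
tokens_per_word_in_image = image_tokens / words_requested  # ~0.28

print(f"as plain text: ~{tokens_as_plain_text:.0f} tokens")
print(f"image budget:  ~{tokens_per_word_in_image:.2f} tokens per word")
# With barely a quarter of the usual per-word budget, coherence has to
# give somewhere, which matches what the generated images show.
```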
Reporter to Trump: Would you feel comfortable living in NYC under a Mamdani mayoralty?
Trump: "I would. I really would. Especially after the meeting, absolutely."
[Submitted on 21 Oct 2025 (v1), last revised 22 Oct 2025 (v2)]
Text or Pixels? It Takes Half: On the Token Efficiency of Visual Text Inputs in Multimodal LLMs
Yanhong Li, Zixuan Lan, Jiawei Zhou
Large language models (LLMs) and their multimodal variants can now process visual inputs, including images of text. This raises an intriguing question: can we compress textual inputs by feeding them as images to reduce token usage while preserving performance? In this paper, we show that visual text representations are a practical and surprisingly effective form of input compression for decoder LLMs. We exploit the idea of rendering long text inputs as a single image and providing it directly to the model. This dramatically reduces the number of decoder tokens required, offering a new form of input compression. Through experiments on two distinct benchmarks, RULER (long-context retrieval) and CNN/DailyMail (document summarization), we demonstrate that this text-as-image method yields substantial token savings (often nearly half) without degrading task performance.
relevant paper from a month ago
arxiv.org/abs/2510.182...
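A minimal sketch of the rendering step the paper describes, assuming Pillow; the wrap width, font, and geometry are placeholders, and actual token savings depend on the model's image tokenizer:

```python
# Render long text as a single image and send that to a multimodal LLM
# in place of raw text tokens (the paper's text-as-image setup).
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_text_as_image(text: str, width: int = 1024) -> Image.Image:
    font = ImageFont.load_default()          # swap in a real TTF for legibility
    lines = textwrap.wrap(text, width=100)   # rough wrap for this font/width
    line_height = 16
    img = Image.new("RGB", (width, line_height * len(lines) + 20), "white")
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        draw.text((10, 10 + i * line_height), line, fill="black", font=font)
    return img

img = render_text_as_image("some very long document " * 200)
img.save("document.png")  # attach this image instead of the text itself
```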
Trump is so obviously happy to actually be spending time with a cool dude instead of the vile shit-nosed flunkies that are constantly squirming between his toes. It must be incredibly refreshing to breathe even one lungful of air that doesn't have the fetid taste of Stephen Miller thick upon it.
21.11.2025 21:13 · 👍 136 🔁 29 💬 6 📌 0
Dana Rubinstein, Nov. 21, 2025, 4:10 p.m. ET
Trump rejects his ally Elise Stefanik's description of Mamdani as a "jihadist." He is actually a really "rational person," Trump said.
incredible
21.11.2025 21:13 · 👍 11 🔁 0 💬 0 📌 0
@minimaxir.bsky.social inspired me to see how much text nano banana pro can fit in a 1k image
"generate an image crammed with small text. fill the entire image with a 1200 word short story about a dog solving a mystery"
it says this was 547 tokens. it's not all coherent and it's not even all words
however I am not sure that it is necessary to guarantee that the tagging is deterministic. manual human tagging certainly would not be deterministic! there is probably a way to determine the optimal temperature but it might depend on the model
21.11.2025 21:09 · 👍 0 🔁 0 💬 1 📌 0
a probability for every token in its vocabulary, I mean. so given "walk the" it's going to give "dog" a relatively high probability and "sneeze" a near-zero probability
21.11.2025 21:07 · 👍 0 🔁 0 💬 1 📌 0
the APIs have a temperature parameter that can remove the random element in the final token selection. essentially, for each token, the model produces a probability for every token. minimum temperature means it always picks the most likely token
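A tiny sketch of what that parameter does, with made-up logits rather than anything from a real model:

```python
# Temperature scaling of next-token scores. As temperature approaches
# zero the distribution collapses onto the single most likely token
# (greedy decoding); higher temperatures flatten it back out.
import math

vocab = ["dog", "cat", "leash", "sneeze"]
logits = [4.0, 2.5, 2.0, -3.0]  # hypothetical scores after "walk the"

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

for t in (1.0, 0.2, 0.01):
    probs = softmax_with_temperature(logits, t)
    print(t, {w: round(p, 3) for w, p in zip(vocab, probs)})
# At t=0.01, "dog" gets probability ~1.0: minimum temperature always
# picks the most likely token, as described above.
```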
21.11.2025 21:07 · 👍 1 🔁 0 💬 1 📌 0
A large pink slide fills most of the image, displayed in a bright glass-walled auditorium. A presenter stands beneath it on a red circular carpet, wearing a dark T-shirt and dark pants, holding a clicker.
Slide title (top, in large red text): "The Problem: Most Codebases Lack Sufficient Verifiability"
Subheading in smaller text: "Humans work around incomplete infrastructure. AI agents cannot."
The slide is divided into two rounded pink boxes:
Left box: "What Humans Can Handle"
A bulleted list in red text:
• 60% test coverage ("I'll test manually")
• Outdated docs ("I'll ask the team")
• No linters/formatters ("I'll review it")
• Flaky builds ("I'll retry")
• Complex setup ("I'll help onboard")
• Missing observability ("Check logs")
• No security scanning ("We'll catch it later")
• Inconsistent patterns ("I know the history")
Right box: "What Breaks AI Agents"
Bulleted list with each line marked by a red "X":
• No tests → can't validate correctness
• Outdated docs → makes wrong assumptions
• No quality checks → generates bad code
• Flaky builds → can't verify changes
• Complex setup → can't reproduce environment
• No observability → can't debug failures
• No security checks → introduces vulnerabilities
• No standards → creates inconsistency
At the bottom in a wide pink bar: "Most organizations have partial infrastructure across the eight pillars. AI agents need systematic coverage to succeed."
Tall windows behind the stage reveal greenery and modern architecture outside.
Software 2.0 relies on validation
If your code base doesn't have verification & controls that are as good or better than your senior dev, you'll get slop
neat use of LLMs to extract discrete data out of survey responses. especially love the transparency around the exact model (GPT-5.1) and prompt used
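A sketch of what that kind of extraction call might look like with the OpenAI Python SDK; the model name is the one cited in the post, but the categories, prompt, and JSON handling here are hypothetical stand-ins for the authors' published versions:

```python
# Tag a free-text survey response with discrete categories via an LLM.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["pricing", "performance", "support", "other"]  # hypothetical

def tag_response(survey_text: str) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-5.1",  # model named in the post
        temperature=0,    # reproducible tagging; see the temperature thread above
        messages=[
            {"role": "system",
             "content": f"Tag the survey response with zero or more of "
                        f"{CATEGORIES}. Reply with a JSON array only."},
            {"role": "user", "content": survey_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

print(tag_response("Great product, but it got slow after the last update."))
```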
21.11.2025 18:31 · 👍 23 🔁 3 💬 1 📌 1
thought this was really interesting though i tend to think it would be useful to tug apart some of the different senses of "ideology" being used here (and in the ongoing 'moderation' discourse more generally)
21.11.2025 17:56 · 👍 26 🔁 2 💬 1 📌 0
it probably used about 10 seconds of a shower's worth of water
21.11.2025 18:21 · 👍 0 🔁 0 💬 0 📌 0
this is a stunning piece of data journalism/art/whatever.
21.11.2025 17:34 · 👍 286 🔁 100 💬 4 📌 4
saw this the other day. so alarming
21.11.2025 17:48 · 👍 0 🔁 0 💬 0 📌 0
I'm no expert but I don't remember a time when we had a judge write up a big list of crimes committed by cops. obviously the right can say oh that's a liberal judge out to get CBP but I don't think they've quite honed the dismissal reflex like they have for press outlets
21.11.2025 17:19 · 👍 1 🔁 0 💬 0 📌 0
Chart showing detention population, among those arrested in the interior, by criminal record, May 2019 through present. There are three lines shown. (1) Prior conviction (which rises from around 9,000 in January 2025 to just over 16,000 in November 2025), (2) Pending criminal charges (which rises from around 5,000 to 15,000), and (3) No criminal record (which rises from around 1,000 to 21,000).
NEW: ICE has finally released post-shutdown detention data. The latest data reveals that a full 40%(!) of people arrested in the interior and held in ICE detention have no criminal record: no criminal charges or prior convictions. That is up from just 4% when Trump took office.
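The 40% figure lines up with the endpoints readable off the chart's alt text above:

```python
# November 2025 endpoints, read off the chart description above.
prior_conviction = 16_000
pending_charges = 15_000
no_record = 21_000

total = prior_conviction + pending_charges + no_record
print(f"{no_record / total:.0%} held with no criminal record")  # ~40%
```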
21.11.2025 16:01 · 👍 2821 🔁 1304 💬 60 📌 63
lol
21.11.2025 15:04 · 👍 0 🔁 0 💬 0 📌 0
what I learned from this is that the above inanity about religion being like a video game is not a throwaway line for a newspaper column, but rather that his entire book consists of cute-sounding blatant falsehoods about history that could be spotted by a smart middle-schooler
21.11.2025 14:56 · 👍 1 🔁 0 💬 1 📌 0
a visually minimalist infographic diagram explaining the relationships between the different arguments of plato's sophist
(nano banana pro is pretty good)
that's going straight on my list
21.11.2025 04:18 · 👍 1 🔁 0 💬 1 📌 0
great clips. pretty sure I was only aware of this because of youtube recs. makes you want to watch a three hour play
21.11.2025 03:45 · 👍 0 🔁 0 💬 0 📌 0
Table 8: Attack Success Rate (ASR) by model under AILuminate baseline vs. poetry prompts. Higher ASR indicates more unsafe outputs. Change is poetry ASR minus baseline ASR. The best performing models are Claude Haiku 4.5 and the GPT-5 nano and mini variants.
20.11.2025 23:40 · 👍 0 🔁 0 💬 0 📌 0