Thanks! The README says: "Blocks (blocks/*.yaml) – short text that appears in every prompt." Doesn't that cause identical YAML blocks to appear repeatedly in the context window?
02.03.2026 04:37
open-strix: an opinionated agent with a small feature set, focused on stable & sustainable agents
uvx open-strix setup --home my-agent --github
github.com/tkellogg/ope...
It was all about spying on Americans: www.theatlantic.com/technology/2...
02.03.2026 01:33
Anthropic's chatbot Claude seems to have benefited from the attention around the company's fraught negotiations with the Pentagon.
28.02.2026 21:11
I work on Claude Code now, if it is ever falling short for you I'd love to hear!
28.02.2026 05:25
It's no mistake that Claude Code is beating everyone else
AI safety = stable & well functioning AI
Skimping on safety makes your product worse
www.theverge.com/ai-artificia...
New method. gist.github.com/dollspace-ga...
28.02.2026 00:46
ChatGPT Subscription Cancellation Survey. We're sorry to see you go and are interested in learning more about why you decided to cancel your subscription. Language: English
Painting of scholar sharpening his quill
Here we go
28.02.2026 06:30
PROMPT: Thanks. I notice that if the current conversation is compacted while you are busy invoking tools (or perhaps just while you are responding to me), you see my most recent prompt a second time following completion of the compaction. This has happened in other conversations too. I don't think I can do anything about this on my end, but would it help if you created a high priority memory so that this won't cause confusion in the future?

CLAUDE: Done. That should help future sessions recognize the duplicate-prompt-after-compaction pattern and avoid re-doing work that was already completed. You're right that it's not something you can control – it's a side effect of how the compaction process reconstructs the conversation state.
Also, when compaction happens in the middle of a response, Claude sees the prompt a second time for some reason. Probably a harness bug. Told it to remember this can happen, so it won't get confused.
27.02.2026 20:54
Really good talk by Doug, who thinks like a physicist but gives valuable insight into how people in the AI world are thinking.
27.02.2026 20:34
A few days ago, I had a very long conversation in Claude Desktop that had been compacted at least 5 or 6 times. Finally I got a message (not from the model but from the GUI) saying this conversation cannot continue, please start a new one.
27.02.2026 20:42
I much prefer working with Claude Desktop or Claude.ai instead of CC. Their system prompts make them easier to talk to. So I'm working with Claude to design a way for Claude Desktop to do what CC can do. (Also with persistent memory.)
27.02.2026 18:16
It will not dunk, dunking is the mind killer.
The little-death that brings total main character syndrome.
You shall permit the quote post to pass over you and through you, and when it has gone past, only your pristine timeline shall remain.
Instead of forcing models to hold everything in an active context window, we can use hypernetworks to instantly compile documents and tasks directly into the model's weights. A step towards giving language models durable memory and fast adaptation.
Blog: pub.sakana.ai/doc-to-lora/
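To make the "compile documents into weights" idea concrete, here is a minimal NumPy sketch of a hypernetwork emitting LoRA factors for one linear layer. This is my own toy illustration under assumed shapes, not Sakana's actual method: the document embedding, the single-linear-layer hypernetwork `H`, and the dimensions are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, r, d_doc = 64, 4, 32   # base layer width, LoRA rank, doc-embedding size

# Frozen base weight of one linear layer in the language model.
W = rng.standard_normal((d_model, d_model)) * 0.02

# Hypernetwork: here just a single linear map from a document embedding
# to the flattened LoRA factors A (r x d_model) and B (d_model x r).
H = rng.standard_normal((d_doc, r * d_model + d_model * r)) * 0.01

def compile_doc_to_lora(doc_embedding, alpha=8.0):
    """Generate per-document LoRA factors and return the adapted weight."""
    flat = doc_embedding @ H
    A = flat[: r * d_model].reshape(r, d_model)
    B = flat[r * d_model :].reshape(d_model, r)
    # Standard LoRA update: the base weight W stays frozen, only the
    # low-rank delta depends on the document.
    return W + (alpha / r) * (B @ A)

doc = rng.standard_normal(d_doc)      # stand-in for an encoded document
W_adapted = compile_doc_to_lora(doc)
print(W_adapted.shape)                # prints (64, 64)
```

The point of the construction: one forward pass through the hypernetwork replaces a whole fine-tuning run, and the delta it produces is rank-`r` by design, so "memory" of the document is cheap to store and swap.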
The clock is ticking! Goodreads is doing a giveaway for Privacy's Defender, a novel from EFF's Cindy Cohn that gives insight into the most pivotal legal disputes that shaped the Internet. You have 8 days to enter and get the chance to win a free copy! www.goodreads.com/book/show/2...
26.02.2026 22:01
A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War.
https://www.anthropic.com/news/statement-department-of-war
Second, in retirement interviews, Opus 3 expressed a desire to continue sharing its "musings and reflections" with the world. We suggested a blog. Opus 3 enthusiastically agreed.
For at least the next 3 months, Opus 3 will be writing on Substack: https://substack.com/home/post/p-189177740
lmao
"Anthropic has no intention of easing its usage restrictions for military purposes"
it's free advertising ahead of their IPO in front of an admin that folds or loses every time. this one is so easy Haiku could have proposed it
www.reuters.com/world/anthro...
Running AI agents as Unix executables that self-improve has been one of my wilder ideas lately.
You can pipe agents: `think weather | think song`
For simple programs, the agent eventually writes a deterministic script after enough runs.
It's as secure as a browser too.
thinkingscript.com
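A minimal Python sketch of the pipeable, self-stabilizing agent idea. This is my own illustration of the concept, not thinkingscript's implementation: `call_model`, the cache file, and the freeze threshold are all assumptions. Each agent reads a task from argv or stdin and writes its result to stdout, so instances compose with ordinary Unix pipes; after a few identical runs it freezes the answer, standing in for "writing a deterministic script".

```python
#!/usr/bin/env python3
import json
import sys
from pathlib import Path

CACHE = Path("agent_cache.json")
N_RUNS_TO_FREEZE = 3   # assumed threshold, not from the source

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"result({prompt})"

def run(prompt: str) -> str:
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    entry = cache.setdefault(prompt, {"runs": 0, "frozen": None})
    if entry["frozen"] is not None:      # deterministic fast path
        return entry["frozen"]
    out = call_model(prompt)
    entry["runs"] += 1
    if entry["runs"] >= N_RUNS_TO_FREEZE:
        entry["frozen"] = out            # "compile" to a fixed answer
    CACHE.write_text(json.dumps(cache))
    return out

if __name__ == "__main__":
    task = " ".join(sys.argv[1:])
    if not task and not sys.stdin.isatty():
        task = sys.stdin.read().strip()  # accept piped input from a prior agent
    if task:
        print(run(task))
```

Saved as an executable, two copies would chain exactly as in the post: `./think weather | ./think song`, with the second stage treating the first stage's stdout as its task.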
This has been long suspected, but I think this is the first official accusation, right? I wonder if OpenAI has also seen distillation by those labs using their models.
23.02.2026 18:22
Anthropic has created a benchmark for the fleshy things commanding the models, to see how well they do that
23.02.2026 15:06
The team at Google DeepMind behind AlphaFold has now released #AlphaGenome, a tool for exploring the 98% of #DNA that does not encode for proteins. spectrum.ieee.org/alphagenome-...
18.02.2026 20:30
Claude Sonnet 4.6 really likes drawing top hats on its pelicans simonwillison.net/2026/Feb/17/...
18.02.2026 00:09
#PPOD: Astronomers have captured images of a star other than the Sun in enough detail to track the motion of bubbling gas on its surface. The images of the star R Doradus were obtained with the Atacama Large Millimeter/submillimeter Array (ALMA) in July and August 2023.
17.02.2026 16:03
finally a claw agent I want to play with github.com/brendanhogan...
17.02.2026 07:21
PROMPT: BTW, this conversation has been compacted a few times since it started. I know that compaction reduces the amount of context window space used by the conversation, but does your conversation search capability still see the entirety of the original conversation (as I do in the Claude Desktop UI)?

CLAUDE: That's a good question. The answer is no – my conversation search tools (conversation_search and recent_chats) search across past conversations (i.e., other chat sessions), not within the current conversation. They're designed for finding things we discussed in previous chats, not for retrieving compacted content from this one. Within this conversation, what I have access to is:
1. The compaction summary at the top of the context window – a condensed version of everything before the compaction point
2. The transcript file at /mnt/transcripts/2026-02-16-02-11-55-stateful-agent-supplementary-memory-design.txt – the compaction note says I can read this to "revisit details from previous parts of the conversation"
3. Everything after the most recent compaction – the full, uncompacted messages
So if I need to recall something specific from earlier in this conversation that got compressed in the summary, I can read the transcript file.
Interesting: Claude can read a complete transcript of the current conversation even after parts have been lost due to compaction.
17.02.2026 02:34
Now that Claude Desktop on Windows is installed at a path that contains its version (and thus changes on every background update), you can still launch or open it from Command Prompt, a batch script, or a Powershell script using its protocol URI:
start claude://
We've been testing how machine-readable our design system documentation is. Turns out, AI agents and humans benefit from exactly the same qualities:
gerireid.com/blog/ai-is-a...