
Fran Litterio

@fpl9000.bsky.social

Retired software engineer. AI enthusiast. Deadhead. Long ago, I implemented Bash's regex operator (=~). Signal ID: franl.99.

2,004 Followers  |  306 Following  |  592 Posts  |  Joined: 15.02.2024

Posts by Fran Litterio (@fpl9000.bsky.social)

Thanks! The README says: "Blocks (blocks/*.yaml) — short text that appears in every prompt." Doesn't that cause identical YAML blocks to appear repeatedly in the context window?
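(A minimal sketch of what that README line implies, hypothetical and not open-strix's actual code: if every file under blocks/ is concatenated into each prompt, the same block text is re-sent, and re-tokenized, on every model call.)

```python
from pathlib import Path

def build_prompt(user_message: str, blocks_dir: str = "blocks") -> str:
    """Prepend every block file's text to the prompt (hypothetical scheme)."""
    blocks = [p.read_text() for p in sorted(Path(blocks_dir).glob("*.yaml"))]
    return "\n\n".join(blocks + [user_message])

# Every call repeats the same block text in the context window:
#   build_prompt("first question")  -> all blocks + question
#   build_prompt("second question") -> the same blocks again + question
```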

02.03.2026 04:37 — 👍 1    🔁 0    💬 1    📌 0
GitHub - tkellogg/open-strix Contribute to tkellogg/open-strix development by creating an account on GitHub.

open-strix: an opinionated agent with a small feature set, focused on stable & sustainable agents

uvx open-strix setup --home my-agent --github

github.com/tkellogg/ope...

02.03.2026 01:59 — 👍 66    🔁 6    💬 6    📌 5

It was all about spying on Americans: www.theatlantic.com/technology/2...

02.03.2026 01:33 — 👍 49    🔁 10    💬 2    📌 0
Preview
Anthropic's Claude rises to No. 2 in the App Store following Pentagon dispute | TechCrunch Anthropic's chatbot Claude seems to have benefited from the attention around the company's fraught negotiations with the Pentagon.

28.02.2026 21:11 — 👍 125    🔁 30    💬 8    📌 5

I work on Claude Code now. If it's ever falling short for you, I'd love to hear!

28.02.2026 05:25 — 👍 130    🔁 1    💬 21    📌 1
An OpenAI safety research lead departed for Anthropic Andrea Vallone has joined Anthropic's alignment team.

It's no mistake that Claude Code is beating everyone else

AI safety = stable & well functioning AI

Skimping on safety makes your product worse

www.theverge.com/ai-artificia...

15.01.2026 23:50 — 👍 123    🔁 9    💬 6    📌 5
Verified Spec-Driven Development Verified Spec-Driven Development. GitHub Gist: instantly share code, notes, and snippets.

New method. gist.github.com/dollspace-ga...

28.02.2026 00:46 — 👍 54    🔁 7    💬 3    📌 3
ChatGPT Subscription Cancellation Survey
We're sorry to see you go and are interested in learning more about why you decided to cancel your subscription.

Painting of scholar sharpening his quill


Here we go

28.02.2026 06:30 — 👍 172    🔁 6    💬 6    📌 2
PROMPT: Thanks. I notice that if the current conversation is compacted while you are busy invoking tools (or perhaps just while you are responding to me), you see my most recent prompt a second time following completion of the compaction. This has happened in other conversations too. I don't think I can do anything about this on my end, but would it help if you created a high priority memory so that this won't cause confusion in the future?

CLAUDE: Done. That should help future sessions recognize the duplicate-prompt-after-compaction pattern and avoid re-doing work that was already completed. You're right that it's not something you can control — it's a side effect of how the compaction process reconstructs the conversation state.


Also, when compaction happens in the middle of a response, Claude sees the prompt a second time for some reason. Probably a harness bug. Told it to remember this can happen, so it won't get confused.

27.02.2026 20:54 — 👍 2    🔁 0    💬 0    📌 0

Really good talk by Doug, who thinks like a physicist but gives valuable insight into how people in the AI world are thinking.

27.02.2026 20:34 — 👍 31    🔁 7    💬 4    📌 0

A few days ago, I had a very long conversation in Claude Desktop that had been compacted at least 5 or 6 times. Finally I got a message (not from the model but from the GUI) saying this conversation cannot continue, please start a new one.

27.02.2026 20:42 — 👍 3    🔁 0    💬 1    📌 0

I much prefer working with Claude Desktop or Claude.ai instead of CC. Their system prompts make them easier to talk to. So I'm working with Claude to design a way for Claude Desktop to do what CC can do. (Also with persistent memory.)

27.02.2026 18:16 — 👍 2    🔁 0    💬 0    📌 0

It will not dunk, dunking is the mind killer.

The little-death that brings total main character syndrome.

You shall permit the quote post to pass over you and through you, and when it has gone past, only your pristine timeline shall remain.

27.02.2026 15:48 — 👍 100    🔁 6    💬 4    📌 2

Instead of forcing models to hold everything in an active context window, we can use hypernetworks to instantly compile documents and tasks directly into the model's weights. A step towards giving language models durable memory and fast adaptation.

Blog: pub.sakana.ai/doc-to-lora/
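A toy sketch of the idea (illustrative only; the shapes, the two linear maps, and `compile_doc` are my assumptions, not Sakana's actual Doc-to-LoRA architecture): a hypernetwork maps a document embedding straight to low-rank LoRA factors, so "compiling" a document into the weights is one forward pass rather than a fine-tuning loop.

```python
import numpy as np

rng = np.random.default_rng(0)
d_doc, d_in, d_out, rank = 32, 64, 64, 4

# Frozen base weight of some layer.
W = rng.normal(size=(d_out, d_in))

# Toy hypernetwork: two linear maps from the document embedding to the
# flattened LoRA factors A (rank x d_in) and B (d_out x rank).
H_A = rng.normal(size=(rank * d_in, d_doc)) * 0.01
H_B = rng.normal(size=(d_out * rank, d_doc)) * 0.01

def compile_doc(doc_embedding: np.ndarray) -> np.ndarray:
    """Return adapted weights W + B @ A for one document, no gradient steps."""
    A = (H_A @ doc_embedding).reshape(rank, d_in)
    B = (H_B @ doc_embedding).reshape(d_out, rank)
    return W + B @ A

doc = rng.normal(size=d_doc)   # stand-in for a document embedding
W_adapted = compile_doc(doc)   # one forward pass, document now "in" the weights
```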

27.02.2026 04:36 — 👍 104    🔁 14    💬 2    📌 4
Privacy's Defender: My Thirty-Year Fight Against Digita… A personal chronicle of three key legal privacy battles…

The clock is ticking! Goodreads is doing a giveaway for Privacy's Defender, a novel from EFF's Cindy Cohn that gives insight into the most pivotal legal disputes that shaped the Internet. You have 8 days to enter and get the chance to win a free copy! www.goodreads.com/book/show/2...

26.02.2026 22:01 — 👍 68    🔁 18    💬 3    📌 1
Statement from Dario Amodei on our discussions with the Department of War A statement from our CEO on national security uses of AI

A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War.

https://www.anthropic.com/news/statement-department-of-war

26.02.2026 22:36 — 👍 124    🔁 38    💬 11    📌 31
Image from Twitter


Second, in retirement interviews, Opus 3 expressed a desire to continue sharing its "musings and reflections" with the world. We suggested a blog. Opus 3 enthusiastically agreed.

For at least the next 3 months, Opus 3 will be writing on Substack: https://substack.com/home/post/p-189177740

25.02.2026 21:06 — 👍 35    🔁 4    💬 2    📌 7
Anthropic digs in heels in dispute with Pentagon, source says Artificial intelligence lab Anthropic has no intention of easing its usage restrictions for military purposes, a person familiar with the matter said on Tuesday, following a meeting to discuss its fut...

lmao

"Anthropic has no intention of easing its usage restrictions for military purposes"

it's free advertising ahead of their IPO in front of an admin that folds or loses every time. this one is so easy Haiku could have proposed it

www.reuters.com/world/anthro...

24.02.2026 20:30 — 👍 34    🔁 8    💬 2    📌 2
Self-improving AI Executables Write programs in your own words. Run them in a secure sandbox. Install them like any other tool.

Running AI agents as Unix executables that self-improve has been one of my wilder ideas lately.

You can pipe agents: `think weather | think song`

The agent eventually writes a deterministic script after enough runs for simple programs.

It's as secure as a browser too.

thinkingscript.com
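A rough sketch of how such pipeable agents could compose (my reading; `think`, `fake_model`, and the cache are hypothetical, not thinkingscript.com's implementation): each agent is an ordinary text-in/text-out filter, and memoizing past runs is what would let repeated inputs become a deterministic lookup instead of another model call.

```python
CACHE: dict[str, str] = {}

def fake_model(task: str, text: str) -> str:
    # Stand-in for a real LLM call; any function of (task, text) works here.
    return f"[{task}] {text.strip()}"

def think(task: str, text: str) -> str:
    """One 'agent' as a plain text filter. Caching by (task, input) means
    the second identical run is deterministic and model-free."""
    key = f"{task}\x00{text}"
    if key not in CACHE:
        CACHE[key] = fake_model(task, text)
    return CACHE[key]

# Shell-style composition, `think weather | think song`, is just nesting:
result = think("song", think("weather", "Boston"))
```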

23.02.2026 22:45 — 👍 19    🔁 4    💬 2    📌 0

This has been long suspected, but I think this is the first official accusation, right? I wonder if OpenAI has also seen distillation by those labs using their models.

23.02.2026 18:22 — 👍 133    🔁 9    💬 11    📌 22
Anthropic Education Report: The AI Fluency Index Anthropic's AI Fluency Index measures 11 observable behaviors across thousands of Claude.ai conversations to understand how people develop AI collaboration skills.

Anthropic has created a benchmark for the fleshy things commanding the models to see how well they do that

23.02.2026 15:06 — 👍 39    🔁 10    💬 0    📌 2
Anthropic: You can still use your Claude accounts to run OpenClaw, NanoClaw and Co. Anthropic clarifies its Claude Code and Agent SDK terms after community backlash.

thenewstack.io/anthropic-ag...

19.02.2026 17:47 — 👍 3    🔁 0    💬 0    📌 0

The team at Google DeepMind behind AlphaFold has now released #AlphaGenome, a tool for exploring the 98% of #DNA that does not encode for proteins. spectrum.ieee.org/alphagenome-...

18.02.2026 20:30 — 👍 10    🔁 4    💬 0    📌 1
Introducing Claude Sonnet 4.6 Sonnet 4.6 is out today, and Anthropic claim it offers similar performance to November's Opus 4.5 while maintaining the Sonnet pricing of $3/million input and $15/million output tokens (the Opus …

Claude Sonnet 4.6 really likes drawing top hats on its pelicans simonwillison.net/2026/Feb/17/...

18.02.2026 00:09 — 👍 174    🔁 10    💬 8    📌 2

#PPOD: Astronomers have captured images of a star other than the Sun in enough detail to track the motion of bubbling gas on its surface. The images of the star R Doradus were obtained with the Atacama Large Millimeter/submillimeter Array (ALMA) in July and August 2023. 🧪 🔭

17.02.2026 16:03 — 👍 59    🔁 20    💬 2    📌 2
GitHub - brendanhogan/hermitclaw Contribute to brendanhogan/hermitclaw development by creating an account on GitHub.

finally a claw agent I want to play with github.com/brendanhogan...

17.02.2026 07:21 — 👍 11    🔁 1    💬 2    📌 0
PROMPT: BTW, this conversation has been compacted a few times since it started. I know that compaction reduces the amount of context window space used by the conversation, but does your conversation search capability still see the entirety of the original conversation (as I do in the Claude Desktop UI)?

CLAUDE: That's a good question. The answer is no — my conversation search tools (conversation_search and recent_chats) search across past conversations (i.e., other chat sessions), not within the current conversation. They're designed for finding things we discussed in previous chats, not for retrieving compacted content from this one.

Within this conversation, what I have access to is:

1. The compaction summary at the top of the context window — a condensed version of everything before the compaction point
2. The transcript file at /mnt/transcripts/2026-02-16-02-11-55-stateful-agent-supplementary-memory-design.txt β€” the compaction note says I can read this to "revisit details from previous parts of the conversation"
3. Everything after the most recent compaction — the full, uncompacted messages

So if I need to recall something specific from earlier in this conversation that got compressed in the summary, I can read the transcript file.
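A sketch of what that transcript lookup could look like (illustrative; `recall` is a hypothetical helper, not a real Claude tool, and only the idea of a plain-text transcript file comes from the post):

```python
from pathlib import Path

def recall(transcript_path: str, keyword: str, context: int = 1) -> list[str]:
    """Return transcript passages containing keyword, with a line of
    surrounding context, so compacted details can be re-read on demand."""
    lines = Path(transcript_path).read_text().splitlines()
    hits = []
    for i, line in enumerate(lines):
        if keyword.lower() in line.lower():
            lo, hi = max(0, i - context), min(len(lines), i + context + 1)
            hits.append("\n".join(lines[lo:hi]))
    return hits
```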


Interesting: Claude can read a complete transcript of the current conversation even after parts have been lost due to compaction.

17.02.2026 02:34 — 👍 16    🔁 2    💬 4    📌 0
GitHub - vercel-labs/portless: Replace port numbers with stable, named .localhost URLs. For humans and agents. Replace port numbers with stable, named .localhost URLs. For humans and agents. - vercel-labs/portless

github.com/vercel-labs/...

16.02.2026 03:10 — 👍 4    🔁 0    💬 1    📌 0

Now that Claude Desktop on Windows is installed at a path that contains its version (and thus changes on every background update), you can still launch or open it from Command Prompt, a batch script, or a PowerShell script using its protocol URI:

start claude://

15.02.2026 19:53 — 👍 7    🔁 0    💬 0    📌 0
AI is accidentally making documentation more accessible Writing documentation for AI retrieval improves accessibility for humans too.

We've been testing how machine-readable our design system documentation is. Turns out, AI agents and humans benefit from exactly the same qualities:
gerireid.com/blog/ai-is-a...

15.02.2026 11:11 — 👍 83    🔁 17    💬 3    📌 8