@lateinteraction.bsky.social
Incoming asst professor at MIT EECS, Fall 2025. Research scientist at Databricks. CS PhD @StanfordNLP.bsky.social. Author of ColBERT.ai & DSPy.ai.
Drew's post is well worth reading as DSPy seems to be a missing link in thinking about LLM usage. Very readable and interesting. www.dbreunig.com/2025/06/10/l...
Thank you @simonwillison.net
If you've been trying to figure out DSPy - the automatic prompt optimization system - this talk by @dbreunig.bsky.social is the clearest explanation I've seen yet, with a very useful real-world case study www.youtube.com/watch?v=I9Zt...
My notes here: simonwillison.net/2025/Oct/4/d...
#pydatabos interesting! How the Arbor library works under the hood, hand in hand with DSPy
premature optimization is the sqrt of all evil
#pydatabos one line motivation for using DSPy!
Stop what you are doing and try out GEPA now!
"GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning" presents such elegant ideas by a collection of amazing researchers!
Here is a tldr of how it works:
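Roughly, and as a hedged sketch only (every name below is illustrative, not the authors' code): GEPA keeps a pool of candidate prompts, runs a candidate on a small batch of examples, lets an LLM reflect on the execution traces and rewrite the prompt, and keeps children that improve, preferring parents that win on at least one example (a Pareto-style pool).
```
import random

def gepa_sketch(seed_prompt, examples, evaluate, reflect, iterations=20, batch_size=4):
    """Very rough, illustrative GEPA-style loop (not the paper's implementation).

    evaluate(prompt, example) -> (score, trace)   # run the program, keep the trace
    reflect(prompt, traces)   -> str              # LLM rewrites the prompt from the traces
    """
    # Keep per-example scores for every candidate so we can prefer candidates
    # that are the best on *some* example.
    pool = {seed_prompt: [evaluate(seed_prompt, ex)[0] for ex in examples]}

    for _ in range(iterations):
        columns = list(zip(*pool.values()))  # all candidates' scores, per example

        def wins(prompt):
            return sum(s >= max(col) for s, col in zip(pool[prompt], columns))

        parent = max(pool, key=wins)
        batch = random.sample(examples, min(batch_size, len(examples)))
        traces = [evaluate(parent, ex)[1] for ex in batch]
        child = reflect(parent, traces)                 # textual mutation via reflection

        child_scores = [evaluate(child, ex)[0] for ex in examples]
        if sum(child_scores) >= sum(pool[parent]):      # simplified acceptance rule
            pool[child] = child_scores

    return max(pool, key=lambda p: sum(pool[p]))
```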
Btw there's no trouble in storage at all either.
ColBERT vectors are often 10 bytes each. Ten bytes. That's like 4 numbers.
It's not "many vectors work better than one vector". It's "set similarity works better than dot product".
Even with the same storage cost.
A diagram illustrating a dual-encoder retrieval model using MaxSim scoring.
- On the left (green box): labeled "Query Encoder, f_Q". It takes a Query as input and produces multiple vector embeddings (rectangles).
- On the right (blue box): labeled "Document Encoder, f_D". It takes a Document as input and produces multiple vector embeddings (rectangles). This block is marked with "Offline Indexing" along the side, showing that documents are pre-encoded.
- Between the two encoders: dotted and solid arrows connect query embeddings to document embeddings, representing similarity comparisons.
- Each comparison goes through a "MaxSim" operation (highlighted boxes), which selects the maximum similarity for each query token across document tokens.
- At the top: outputs of MaxSim flow into a summation node (Σ) to produce a single score for ranking.
This shows the ColBERT (Contextualized Late Interaction) retrieval framework: query and document are encoded separately, interactions are computed via maximum similarity per query token, and results are aggregated into a score.
colbert-muvera-micro, a 4M(!!) late interaction model
late interaction models do embedding vector index queries and reranking at the same time, leading to far higher accuracy
huggingface.co/NeuML/colber...
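A minimal NumPy sketch of the MaxSim scoring shown in the diagram above, assuming L2-normalized token embeddings (illustrative only, not the ColBERT codebase):
```
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction score: for each query token, take the max similarity
    over all document tokens, then sum. Both inputs are assumed L2-normalized,
    so dot products are cosine similarities. Shapes: (q, d) and (n, d)."""
    sims = query_vecs @ doc_vecs.T        # (q, n) token-to-token similarities
    return float(sims.max(axis=1).sum())  # MaxSim per query token, then sum

# Toy usage: 3 query tokens vs 5 document tokens, 8-dim embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(3, 8)); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(5, 8)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```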
Let the Model Write the Prompt | Drew Breunig #dspy #promptengineering #llms #generativeai
Here's the write-up of my Data+AI Summit talk on the perils of prompts in code and how to mitigate them with DSPy. www.dbreunig.com/2025/06/10/l...
Have you heard the news? #MLflow now supports tracking for DSPy optimization workflows, just like it does for #PyTorch training!
Keep reading to see what this means for your #LLM projects…
#opensource #dspy #oss
TODAY at 4PM PT - MLflow Community Meetup!
Register today: lu.ma/mlflow423
Join the global MLflow community for two exciting tech deep dives:
• MLflow + #DSPy Integration
• Cleanlab + #MLflow
Streaming live on YouTube, LinkedIn, and X
Live Q&A with the presenters
#opensource #oss
MLflow now supports tracking for #DSPy (Community) optimization, just like it does for @pytorch.org training!
#MLflow is the first to bring full visibility into DSPy's prompt optimization process. More observability, less guesswork.
Get started today! medium.com/@AI-on-Datab...
#opensource
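A quick, hedged sketch of what that tracking can look like with a recent MLflow version (the autolog call is the MLflow-side entry point; the model name, program, and metric below are illustrative, and exact flags vary by version):
```
import dspy
import mlflow

mlflow.dspy.autolog()  # capture DSPy calls and optimizer runs in MLflow
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

program = dspy.Predict("question -> answer")
trainset = [dspy.Example(question="2+2?", answer="4").with_inputs("question")]

def exact_match(example, pred, trace=None):
    # Simple metric for the optimizer to maximize.
    return example.answer.lower() in pred.answer.lower()

with mlflow.start_run():
    optimizer = dspy.BootstrapFewShot(metric=exact_match)
    optimized = optimizer.compile(program, trainset=trainset)
```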
Join us for the next MLflow Community Meetup: Wednesday, April 23 at 4PM PT!
• Explore the new MLflow + #DSPy integration
• Learn how Cleanlab adds trust to AI workflows with MLflow
Live Q&A + demos
Streamed on YouTube, LinkedIn, and X
RSVP: lu.ma/mlflow423
#opensource #mlflow #oss
Nice work! For history:
dspy.ai/api/primitiv...
This was built by a long-time DSPy community member!
Yes there's an evals crisis, but evaluating *models* is not even the right question most of the time
LangProBe from Shangyin Tan, @lakshyaaagrawal.bsky.social, Arnav Singhvi, Liheng Lai, @michaelryan207.bsky.social et al begins to ask what complete *AI systems* we should build & under what settings
Introducing LangProBe: the first benchmark testing where and how composing LLMs into language programs affects cost-quality tradeoffs!
We find that, on avg across diverse tasks, smaller models within optimized programs beat calls to larger models at a fraction of the cost.
It doesn't help that we in ML often only design abstractions that leak all kinds of implementation details. Folks often define ML itself in terms of techniques, not problems!
But it's prematurely abstracting that leads to the bitterness of wasted effort, and not "modularity doesn't work for AI". 2/2
Composition & abstraction are the foundations of CS, but are clearly absent in modern ML.
It's not that they're not crucial for intelligent software. But it takes building many half-working systems to abstract successfully, and it takes good abstractions to have primitives worth composing.
1/2
4) By default, IR methods that use "multiple vectors" (e.g., cross-encoders) are unscalable. It seems like a necessary tradeoff, but the fascinating thing in late interaction is that it's easy to implement in asymptotically sub-linear ways, thanks to pruning.
Hope this was useful!
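To make point 4 concrete, here is a hedged, illustrative sketch of prune-then-rerank (doc_ids maps each token row to its document; the brute-force top-k lookup stands in for the ANN index that makes real implementations sub-linear; this is not ColBERT's actual code):
```
import numpy as np

def prune_then_rerank(query_vecs, doc_token_vecs, doc_ids, k=32):
    """Illustrative two-stage late-interaction search.

    Stage 1 (pruning): for each query token, fetch its top-k nearest document
    tokens; in a real system this is an ANN index lookup.
    Stage 2 (rerank): compute the exact MaxSim score only for candidate documents.
    """
    sims = query_vecs @ doc_token_vecs.T                     # stand-in for ANN lookups
    top_tokens = np.argsort(-sims, axis=1)[:, :k]            # top-k doc tokens per query token
    candidates = set(doc_ids[top_tokens].ravel().tolist())   # documents owning those tokens

    scores = {}
    for doc in candidates:
        d = doc_token_vecs[doc_ids == doc]                   # this document's token embeddings
        scores[doc] = float((query_vecs @ d.T).max(axis=1).sum())  # exact MaxSim
    return sorted(scores.items(), key=lambda kv: -kv[1])
```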
3) "Multi-vector" makes it sound like these approaches win because they store "more stuff".
But that's not true: if you look at how aggressive ColBERTv2 representations are compressed, it's often ~20 bytes per vector (like 5 floats), which can be smaller than popular uncompressed single vectors!
For dot products, every time you "fix" one query-document pair, you likely break so many other pairs by moving the query and/or document representations.
For ColBERT, you typically *fix* more than you break because you're moving *tokens* in a much smaller (and far more composable!) space.
The problem isn't the vector representation, it's the **learnability of the scoring function**.
A dot product is just very hard to learn. An intuition I learned from Menon et al (2021) is that:
2) More importantly, there's nothing to say you can't store a TON of information in a single vector. And it's easy to use multiple vectors and gain *zero* improvement over a single-vector, e.g. if you replace MaxSim with AvgSim in ColBERT, without any other changes, it doesn't work!
1) If you take ColBERT and force it to use only a constant number of vectors (e.g., 16), it'll barely outperform one vector in the general case.
It's not that you need token-level alignment per se (you don't either!) but you want fine-grained representations, not just *multiple* representations.
Some quick thoughts: On why we gave the ColBERT paradigm the name "late interaction" instead of "multi-vector", a term that emerged later and that has proven to be more intuitive.
**The mechanism is actually not about having multiple vectors at all.** You can see this in four different ways.
1/7
Btw the full general form to export all message templates is:
```
{name: my_adapter.format(p.signature, demos=p.demos, inputs={k: f'{{{k}}}' for k in p.signature.input_fields}) for name, p in your_program.named_predictors()}
```
The default Adapter is dspy.ChatAdapter().
But you can do all customization you mentioned with a custom Adapter:
```
class MyAdapter(dspy.Adapter):
    def format(self, signature, demos, inputs):
        # Build the messages sent to the LM however you like.
        return {"role": "user", "content": ...}

    def parse(self, signature, completion):
        # Map the raw completion back onto the signature's output fields.
        return {...}
```
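And a hedged usage sketch, assuming the adapter is wired in via dspy.configure (fill in format/parse first; the model name is just an example):
```
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"), adapter=MyAdapter())
qa = dspy.Predict("question -> answer")
print(qa(question="What does an Adapter control?").answer)
```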