@diffbot.bsky.social
AI that finds facts · diffbot.com
Zoom is the new Webex.
9 minutes after the meeting was supposed to start. Zoom Rooms won't connect. Locked out despite never logging in.
Between scant repo examples, FastMCP's irritating vector resemblance to FastAPI, and 180-degree overhauls on every MCP spec release, it's impossible to vibe code your way to a working server.
13.10.2025 23:05
Hiring for engineers? Instead of a leetcode interview, try asking them to build an #MCP server in Python.
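For anyone curious what that exercise looks like, here is a minimal sketch of an MCP server using the fastmcp package. The server name and the two tools are invented for illustration; a real interview answer would expose something meatier.

```python
# minimal_mcp_server.py — bare-bones MCP server sketch (assumes `pip install fastmcp`).
# The server name and tools below are invented for illustration.
from fastmcp import FastMCP

mcp = FastMCP("interview-demo")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

@mcp.tool()
def reverse_words(text: str) -> str:
    """Return the text with its word order reversed."""
    return " ".join(reversed(text.split()))

if __name__ == "__main__":
    # Defaults to the stdio transport, so an MCP client can launch
    # this script as a subprocess and call the tools above.
    mcp.run()
```

Run it with `python minimal_mcp_server.py` and point any MCP-capable client at it over stdio.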
This first turn on Claude Opus 4.1 is so wrong it might just take burning the rest of humanity's natural gas reserves to fix it.
[Screenshot: Diffbot LLM grounding its output to primary sources of knowledge for this query, such as Emojipedia.]
The solution is to reinforce the use of knowledge tool calls for every query in post-training. By consistently grounding responses to citable sources, even the occasional quirk or hallucination is explainable.
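To make that concrete, here is a toy sketch of the reward signal such post-training implies: a rollout only scores well if it actually called a knowledge tool and the final answer cites what came back. The trajectory format and the "knowledge_search" tool name are assumptions for illustration, not Diffbot's training code.

```python
# Toy reward for "always ground your answer" post-training.
# The message format and the "knowledge_search" tool name are illustrative assumptions.
from typing import Any


def grounding_reward(trajectory: list[dict[str, Any]]) -> float:
    """Score one rollout: did it retrieve before answering, and does the answer cite the retrieval?"""
    tool_calls = [m for m in trajectory if m.get("role") == "tool_call"]
    tool_results = [m for m in trajectory if m.get("role") == "tool_result"]
    final = trajectory[-1] if trajectory else {}

    if not any(c.get("name") == "knowledge_search" for c in tool_calls) or not tool_results:
        return 0.0  # answered from pretrained memory alone

    retrieved = {r.get("url") for r in tool_results if r.get("url")}
    cited = set(final.get("citations", []))
    if not (cited & retrieved):
        return 0.2  # retrieved, but the answer is not tied back to its sources
    return 1.0  # grounded: knowledge tool called and cited in the final answer
```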
09.10.2025 01:56
This phenomenon can sneak into production environments in unobvious ways. If there are enough token predictions pointing to the right answer, it's all too easy to skip the tool call and generate a structured response that still validates schemas.
09.10.2025 01:56
It's fascinating to see the Mandela Effect take hold in LLMs as much as it does in people.
This is especially true for LLMs that treat their own pretrained memory as a tool for knowledge recall rather than acting as an orchestrator (which is most LLMs).
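The "validates schemas anyway" trap described a couple of posts up is easy to demonstrate: a schema checks shape, not provenance. A small sketch with pydantic; the Answer schema and the fabricated payload are invented, and the extra guard is one way to reject answers that skipped retrieval.

```python
# Schema validation passes whether or not the model ever made a tool call.
# Uses pydantic v2; the Answer schema and the payload are invented examples.
from pydantic import BaseModel


class Answer(BaseModel):
    question: str
    answer: str
    sources: list[str] = []  # URLs returned by the knowledge tool, if it was called


# A confident, schema-perfect response generated straight from pretrained memory:
payload = '{"question": "example question", "answer": "a confident answer with no retrieval behind it", "sources": []}'

parsed = Answer.model_validate_json(payload)  # validates fine — the shape is correct
print(parsed.answer)


def is_grounded(a: Answer) -> bool:
    """Provenance has to be checked separately from the schema."""
    return len(a.sources) > 0


assert not is_grounded(parsed)  # this one skipped the tool call
```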
Check out the repo for more info:
github.com/diffbot/diff...
89,886 developers are building their own Perplexity on-prem with Diffbot LLM:
huggingface.co/diffbot/Llam...
The model isn't the moat. Perplexity can be recreated as a side project. #DeepSeek proved this. We proved this.
Download Diffbot LLM. Run it off your own GPU. Congrats, your on-prem #AI is smarter than #Perplexity.
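If you want to try that, the stock Hugging Face loading pattern is enough to get tokens out. The repo id below is a placeholder since the full model name is in the Hugging Face link above; swap in that id and whatever prompt format the model card specifies.

```python
# Sketch of running a downloaded checkpoint on your own GPU with transformers.
# "diffbot/<model-name>" is a placeholder — use the repo id from the Hugging Face
# link above and follow that model card's prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "diffbot/<model-name>"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision keeps a smaller model on one GPU
    device_map="auto",
)

prompt = "What does unflavored pea protein taste like?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```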
2. We used the profits from our primary business to train Diffbot LLM. Perplexity raised $915M to train theirs.
3. We open sourced Diffbot LLM. Perplexity chose to keep theirs secret.
Let's be frank: the score difference is insignificant. And we'll probably play SimpleQA tag for a while.
What IS significant is how we got here vs. Perplexity.
1. Diffbot LLM is a side project. Sonar is Perplexity's entire business.
...so I set it up to run the 4,000-question eval on Diffbot LLM overnight and went to bed.
The next morning, we beat Sonar Pro.
While I was working on my talk last week, Perplexity released the Sonar Pro API with a special emphasis on its factuality benchmark F1 score of 0.858, handily beating other internet-connected LLMs like Gemini-2.0-flash.
The SimpleQA benchmark they used is open source and LLM judged...
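The overnight run is little more than a loop over the question file plus an LLM judge. Here is a rough harness skeleton with the model under test and the judge stubbed out; the F-score aggregation reflects my reading of how SimpleQA combines its three grades, so treat the details as assumptions rather than the official eval script.

```python
# Skeleton of a SimpleQA-style eval. ask_model() and judge() are stubs to wire up;
# the F-score aggregation is my reading of the grading scheme, not the official script.
import json


def ask_model(question: str) -> str:
    raise NotImplementedError("call the model under test here")


def judge(question: str, gold: str, predicted: str) -> str:
    """Return 'correct', 'incorrect', or 'not_attempted' (graded by an LLM judge)."""
    raise NotImplementedError("call the judge model here")


def run_eval(path: str) -> float:
    grades = []
    with open(path) as f:  # one {"problem": ..., "answer": ...} JSON object per line
        for line in f:
            ex = json.loads(line)
            grades.append(judge(ex["problem"], ex["answer"], ask_model(ex["problem"])))

    correct = grades.count("correct") / len(grades)
    attempted = [g for g in grades if g != "not_attempted"]
    correct_given_attempted = grades.count("correct") / len(attempted) if attempted else 0.0

    # F-score: harmonic mean of overall correctness and correctness when an answer was attempted.
    if correct + correct_given_attempted == 0:
        return 0.0
    return 2 * correct * correct_given_attempted / (correct + correct_given_attempted)
```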
#Perplexity Sonar Pro API launched last week as the best performing model on factuality.
24 hours later, it's the 2nd best performing model (and it's not because of #DeepSeek).
Why?
A demo is also available at diffy.chat.
We look forward to building a future of grounded AI with you all.
Diffbot LLM's lighter footprint puts on-prem hosting well within reach.
And we are excited to share that we are releasing Diffbot LLM open source on #Github, with weights available for download on #Huggingface.
github.com/diffbot/diff...
At Diffbot, we believe that general purpose reasoning will eventually be distilled down to ~1B parameters.
Knowledge is best retrieved at inference, outside of model weights.
The benefit of full source attribution goes two ways.
Not only is credit provided to publishers, every fact is also independently verifiable.
[Screenshot: Diffbot LLM's response to the query "What does unflavored pea protein taste like?"]
Every response from Diffbot LLM draws from the results of real-time expert web searching and queries to the Diffbot Knowledge Graph.
Naturally, this means Diffbot LLM always provides full attribution to its cited sources.
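The pattern is simple to express: search first, answer second, cite everything. A toy version below, with a tiny in-memory corpus standing in for the real-time web search and Knowledge Graph queries Diffbot LLM actually makes, and the answer step reduced to quoting the retrieved text.

```python
# Toy retrieve-then-answer-with-citations pattern. The in-memory corpus stands in
# for real-time web search and Knowledge Graph queries; a real system would prompt
# an LLM with the hits instead of quoting them verbatim.
CORPUS = [
    {"url": "https://example.com/pea-protein",
     "text": "Unflavored pea protein is commonly described as earthy and slightly bitter."},
    {"url": "https://example.com/emoji",
     "text": "Emojipedia documents the origin and meaning of individual emoji."},
]


def search(query: str) -> list[dict]:
    """Naive keyword retrieval over the stand-in corpus."""
    words = set(query.lower().split())
    return [doc for doc in CORPUS if words & set(doc["text"].lower().split())]


def answer(query: str) -> dict:
    hits = search(query)
    if not hits:
        return {"answer": "No supporting sources found.", "sources": []}
    return {
        "answer": " ".join(h["text"] for h in hits),  # every claim maps to a source
        "sources": [h["url"] for h in hits],          # full attribution
    }


print(answer("what does unflavored pea protein taste like"))
```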
We launched the world's most grounded #LLM: Diffbot #GraphRAG LLM.
Instead of training on ever-larger corpora of data, Diffbot LLM is trained to be an expert web researcher.
In fact, Diffbot LLM makes zero assumptions about its knowledge of the world.
Where are the #AI starter packs at?
10.12.2024 21:50
Hello World.
10.12.2024 18:45