
Diffbot

@diffbot.bsky.social

AI that finds facts diffbot.com

20 Followers  |  9 Following  |  22 Posts  |  Joined: 10.12.2024

Latest posts by diffbot.bsky.social on Bluesky



Zoom is the new Webex.

9 minutes after the meeting was supposed to start. Zoom Rooms won't connect. Locked out despite never logging in.

05.02.2026 02:56 — 👍 0    🔁 0    💬 0    📌 0

Between scant repo examples, FastMCP's irritating surface resemblance to FastAPI, and 180-degree overhauls on every MCP spec release, it's impossible to vibe code your way to a working server.

13.10.2025 23:05 — 👍 0    🔁 0    💬 0    📌 0

Hiring for engineers? Instead of a leetcode interview, try asking them to build an #MCP server in Python.

This first turn on Claude Opus 4.1 is so wrong it might just take burning the rest of humanity's natural gas reserves to fix it.

13.10.2025 23:05 — 👍 0    🔁 0    💬 1    📌 0
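
The core of such an interview exercise is small: an MCP server is essentially a JSON-RPC 2.0 endpoint that advertises tools and dispatches `tools/call` requests. Below is a minimal stdlib-only sketch of that dispatch shape; the tool name and handler are hypothetical, and this is not the official MCP SDK:

```python
import json

# Hypothetical tool registry; a real MCP server would also expose
# JSON Schema metadata for each tool via a tools/list method.
TOOLS = {
    "get_weather": lambda args: f"Sunny in {args['city']}",
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 2.0 tools/call request to a registered tool."""
    req = json.loads(raw)
    params = req.get("params", {})
    handler = TOOLS.get(params.get("name"))
    if handler is None:
        body = {"error": {"code": -32601, "message": "unknown tool"}}
    else:
        body = {"result": handler(params.get("arguments", {}))}
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), **body})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Tokyo"}},
})
print(handle_request(request))  # {"jsonrpc": "2.0", "id": 1, "result": "Sunny in Tokyo"}
```

The hard parts of the real exercise (transport negotiation, schema advertisement, spec-version churn) sit on top of this skeleton, which is exactly why it makes a revealing interview question.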
Screenshot of Diffbot LLM grounding its output to primary sources of knowledge related to this query such as Emojipedia.

The solution is to reinforce the use of knowledge tool calls for every query in post-training. By consistently grounding responses in citable sources, even the occasional quirk or hallucination is explainable.

09.10.2025 01:56 — 👍 0    🔁 0    💬 0    📌 0

This phenomenon can sneak into production environments in unobvious ways. If there are enough token predictions pointing to the right answer, it's all too easy to skip the tool call and generate a structured response that still validates schemas.

09.10.2025 01:56 — 👍 0    🔁 0    💬 1    📌 0
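
As a toy illustration of that failure mode (the schema and field names here are hypothetical), a purely structural check cannot tell a grounded answer from one generated straight out of the weights:

```python
def validates(resp: dict) -> bool:
    """Purely structural check: required keys present with the right types."""
    return (
        isinstance(resp.get("answer"), str)
        and isinstance(resp.get("confidence"), (int, float))
        and isinstance(resp.get("sources"), list)
    )

# Response produced after a knowledge tool call, with a citation attached.
grounded = {"answer": "Paris", "confidence": 0.98, "sources": ["wikipedia.org"]}

# Response where the model skipped the tool call; the shape is still valid.
ungrounded = {"answer": "Paris", "confidence": 0.97, "sources": []}

print(validates(grounded), validates(ungrounded))  # True True

# One cheap production guard: reject structurally valid but uncited output.
def grounded_enough(resp: dict) -> bool:
    return validates(resp) and len(resp["sources"]) > 0
```

Schema validation alone passes both responses; only a check on the evidence trail (here, a non-empty `sources` list) catches the skipped tool call.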

It's fascinating to see the Mandela Effect take hold on LLMs as much as it does people.

This is especially true for LLMs that see their own pretrained memory as a tool for knowledge recall, rather than as an orchestrator (most LLMs).

09.10.2025 01:56 — 👍 0    🔁 0    💬 1    📌 0
GitHub - diffbot/diffbot-llm-inference: Diffbot LLM Inference Server

Check out the repo for more info:

github.com/diffbot/diff...

30.01.2025 03:07 — 👍 0    🔁 0    💬 0    📌 0
diffbot/Llama-3.1-Diffbot-Small-2412 · Hugging Face

89,886 developers are building their own Perplexity on-prem with Diffbot LLM:

huggingface.co/diffbot/Llam...

30.01.2025 03:07 — 👍 0    🔁 0    💬 1    📌 0

The model isn't the moat. Perplexity can be recreated as a side project. #DeepSeek proved this. We proved this.

Download Diffbot LLM. Run it off your own GPU. Congrats, your on-prem #AI is smarter than #Perplexity.

30.01.2025 03:07 — 👍 0    🔁 0    💬 1    📌 0

2. We used the profits from our primary business to train Diffbot LLM. Perplexity raised $915M to train theirs.

3. We open sourced Diffbot LLM. Perplexity chose to keep theirs secret.

30.01.2025 03:07 — 👍 0    🔁 0    💬 1    📌 0

Let's be frank: the score difference is insignificant. And we'll probably play SimpleQA tag for a while.

What IS significant is how we got here vs. Perplexity.

1. Diffbot LLM is a side project. Sonar is Perplexity's entire business.

30.01.2025 03:07 — 👍 0    🔁 0    💬 1    📌 0
Post image

...so I set it up to run the 4000 question eval on Diffbot LLM overnight and went to bed.

The next morning, we beat Sonar Pro.

30.01.2025 03:07 — 👍 0    🔁 0    💬 1    📌 0

While I was working on my talk last week, Perplexity released the Sonar Pro API with special emphasis on its factuality benchmark F1 score of 0.858, handily beating other internet-connected LLMs like Gemini-2.0-flash.

The SimpleQA benchmark they used is open source and LLM-judged...

30.01.2025 03:07 — 👍 0    🔁 0    💬 1    📌 0

#Perplexity Sonar Pro API launched last week as the best performing model on factuality.

24 hours later, it's the 2nd best performing model (and it's not because of #DeepSeek).

Why? 👇

30.01.2025 03:07 — 👍 0    🔁 0    💬 1    📌 0
Diffy Chat

A demo is also available at diffy.chat.

We look forward to building a future of grounded AI with you all.

09.01.2025 21:47 — 👍 0    🔁 0    💬 0    📌 0
GitHub - diffbot/diffbot-llm-inference: Diffbot LLM Inference Server

Diffbot LLM's lighter footprint puts on-prem hosting well within reach.

And we are excited to share that we are releasing Diffbot LLM open source on #Github, with weights available for download on #Huggingface.

github.com/diffbot/diff...

09.01.2025 21:47 — 👍 0    🔁 0    💬 1    📌 0

At Diffbot, we believe that general-purpose reasoning will eventually be distilled down to ~1B parameters.

Knowledge is best retrieved at inference, outside of model weights.

09.01.2025 21:47 — 👍 0    🔁 0    💬 1    📌 0
Video thumbnail

The benefit of full source attribution goes two ways.

Not only is credit provided to publishers, every fact is also independently verifiable.

09.01.2025 21:47 — 👍 0    🔁 0    💬 1    📌 0
Screenshot of Diffbot LLM's response to the query "What does unflavored pea protein taste like?"

Every response from Diffbot LLM draws from the results of real-time expert web searching and queries to the Diffbot Knowledge Graph.

Naturally, this means Diffbot LLM always provides full attribution to its cited sources.

09.01.2025 21:47 — 👍 0    🔁 0    💬 1    📌 0

We launched the world's most grounded #LLM: Diffbot #GraphRAG LLM.

Instead of training on ever larger corpuses of data, Diffbot LLM is trained to be an expert web researcher.

In fact, Diffbot LLM makes zero assumptions about its knowledge of the world.

09.01.2025 21:47 — 👍 0    🔁 0    💬 1    📌 0

Where are the #AI starter packs at?

10.12.2024 21:50 — 👍 0    🔁 0    💬 0    📌 0

Hello World.

10.12.2024 18:45 — 👍 1    🔁 0    💬 0    📌 0
