@inceptionlabs.bsky.social
Pioneering a new generation of LLMs.
Check out our blog post: www.inceptionlabs.ai/news
26.02.2025 20:51

Try Mercury Coder on our playground at chat.inceptionlabs.ai
26.02.2025 20:51

On Copilot Arena, developers consistently prefer Mercury's generations: it ranks #1 on speed and #2 on quality. Mercury is the fastest code LLM on the market.
26.02.2025 20:51

We achieve over 1,000 tokens/second on NVIDIA H100s. Blazing-fast generation without specialized chips!
26.02.2025 20:51

Mercury Coder diffusion large language models match the performance of frontier speed-optimized models like GPT-4o Mini and Claude 3.5 Haiku while running up to 10x faster.
26.02.2025 20:51

We are excited to introduce Mercury, the first commercial-grade diffusion large language model (dLLM)! dLLMs push the frontier of intelligence and speed with parallel, coarse-to-fine text generation.
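The posts don't detail Mercury's sampler, but "parallel, coarse-to-fine text generation" is often illustrated with masked-diffusion decoding: start from a fully masked sequence and, at each step, commit the most confident token proposals in parallel. The sketch below is a toy illustration of that loop only, not Inception Labs' method; `toy_model`, its confidence scores, and the vocabulary are all hypothetical stand-ins for a real denoising network.

```python
import random

MASK = "<mask>"
VOCAB = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]

def toy_model(seq):
    """Stand-in for a denoising network: for every masked position,
    propose a token and a confidence score. A real dLLM would produce
    these with one parallel forward pass over the whole sequence."""
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(seq) if tok == MASK}

def coarse_to_fine_decode(length=8, steps=4, seed=0):
    """Begin fully masked; each step unmasks the highest-confidence
    slice of positions in parallel, refining a coarse draft into text.
    Contrast with autoregressive decoding, which commits one token per step."""
    random.seed(seed)
    seq = [MASK] * length
    for step in range(steps):
        proposals = toy_model(seq)
        if not proposals:
            break
        # Spread the remaining masked positions over the remaining steps.
        budget = max(1, len(proposals) // (steps - step))
        best = sorted(proposals.items(), key=lambda kv: -kv[1][1])[:budget]
        for pos, (tok, _) in best:
            seq[pos] = tok
    return seq

print(" ".join(coarse_to_fine_decode()))
```

Because every step fills several positions at once, the number of model calls grows with the step count rather than the sequence length, which is one intuition for why dLLMs can decode so much faster than token-by-token models.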
26.02.2025 20:51

A new generation of LLMs . . . coming soon . . .
25.02.2025 00:28