
Jonathan Ross

@jonathan-ross.bsky.social

CEO + Founder @ Groq, the Most Popular API for Fast Inference | Creator of the TPU and LPU, Two of the World’s Most Important AI Chips | On a Mission to Double the World's AI Compute by 2027

969 Followers  |  16 Following  |  42 Posts  |  Joined: 23.11.2024

Latest posts by jonathan-ross.bsky.social on Bluesky

Groq CEO: Our mission is to provide over half of the world’s inference compute. Jonathan Ross, CEO and founder of Groq, joins CNBC’s 'Squawk on the Street' to discuss the AI chip startup’s $750 million funding round, its push to deliver faster, lower-cost inference chips, and why...

Fantastic insight on the massive demand for AI inference infrastructure: “The demand for AI compute is insatiable,” says @groq.com CEO @jonathan-ross.bsky.social. “Our mission is to provide over half of the world’s inference compute.” - @cnbc.com

cnb.cx/4nG7Pcm #AI

25.09.2025 12:46 — 👍 4    🔁 3    💬 0    📌 0

Founder Tip #2: You have to spend time to make time.

Hiring, re-organizing, calendar clean up (across the team), preparation for meetings (internal and external), etc. Half my day is available for whatever I find important - because the other half is spent freeing up time.

06.09.2025 16:52 — 👍 1    🔁 0    💬 0    📌 0

Clearly China doesn't have enough compute for scaled AI today:
- GPT-OSS, Llama [US]: optimized for cheaper inference
- R1, Kimi K2, Qwen [China]: optimized for cheaper training

With China's population, reducing inference costs is more important, and that means more training.

19.08.2025 12:19 — 👍 0    🔁 0    💬 0    📌 0
Post image

Transcribe audio with @groq.com.

16.04.2025 14:10 — 👍 5    🔁 1    💬 1    📌 0
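For anyone who wants to try that, here is a minimal sketch using the groq Python SDK's audio transcription endpoint; the filename and the whisper-large-v3 model id are assumptions to confirm against the current GroqCloud model list.

```python
import os
from groq import Groq  # pip install groq

# Assumes GROQ_API_KEY is set in the environment and that whisper-large-v3
# is available on your GroqCloud account (check the console model list).
client = Groq(api_key=os.environ["GROQ_API_KEY"])

filename = "meeting.m4a"  # hypothetical local audio file
with open(filename, "rb") as audio:
    transcription = client.audio.transcriptions.create(
        file=(filename, audio.read()),
        model="whisper-large-v3",
        response_format="json",
    )

print(transcription.text)
```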

I spent the weekend hanging out with a group of friends. A question we asked: what dreams did we have that we gave up on?

When I was 18, I had two dreams:

1) Be an astronaut
2) Build AI chips

I didn’t give up on one of them. 😀

24.03.2025 14:39 — 👍 2    🔁 0    💬 0    📌 0
Mistral Saba Added to GroqCloud™ Model Suite - Groq is Fast AI Inference. GroqCloud™ has added another openly-available model to our suite – Mistral Saba. Mistral Saba is Mistral AI’s first specialized regional language model,

Big news! Mistral AI Saba 24B is on GroqCloud! The specialized regional language model is perfect for Middle East and South Asia-based devs and enterprises building AI solutions that need fast inference.
Learn more: groq.com/mistral-saba...

27.02.2025 17:04 — 👍 7    🔁 1    💬 1    📌 0
Jonathan Ross, Founder & CEO @ Groq: NVIDIA vs Groq - The Future of Training vs Inference | E1260
YouTube video by 20VC with Harry Stebbings

YouTube: www.youtube.com/watch?v=xBMR...

Spotify: open.spotify.com/episode/30np...

Try Groq: console.groq.com

17.02.2025 18:00 — 👍 6    🔁 1    💬 1    📌 0
Video thumbnail

It was a pleasure being back on 20VC with Harry Stebbings. His craft of interviewing is second to none and we went deep.

This is the interview from just after we launched 19,000 LPUs in Saudi Arabia. We built the largest inference cluster in the region.

Link to the interview in the comments below!

17.02.2025 18:00 — 👍 65    🔁 7    💬 4    📌 0
Post image

We built the region’s largest inference cluster in Saudi Arabia in 51 days and we just announced a $1.5B agreement for Groq to expand our advanced LPU-based AI inference infrastructure.

Build fast.

09.02.2025 22:41 — 👍 7    🔁 1    💬 0    📌 0
ALT: a close-up of a man's face with the word “inconceivable” written on it.
01.02.2025 18:00 — 👍 2    🔁 0    💬 0    📌 0
Video thumbnail

My emergency episode with @harrystebbings.bsky.social at 20VC just launched on the impact of #DeepSeek on the AI world

29.01.2025 16:41 — 👍 156    🔁 31    💬 7    📌 2
Post image

Yesterday at the World Economic Forum in Davos, I joined a constructive discussion on AGI alongside @andrewyng.bsky.social, @yejinchoinka.bsky.social, @jonathan-ross.bsky.social, @thomwolf.bsky.social and moderator @nxthompson.bsky.social. Full discussion here: www.weforum.org/meetings/wor...

23.01.2025 17:01 — 👍 46    🔁 6    💬 1    📌 1
Post image

13.01.2025 16:14 — 👍 14    🔁 2    💬 0    📌 0

Thank you! 🙏

09.01.2025 03:29 — 👍 3    🔁 0    💬 0    📌 0

Over the next decade, we want to drive the cost of generative AI down 1,000x, making a lot more activities profitable. And we think that will cause a 100x increase in spend.

🧵(5/5)

08.01.2025 16:03 — 👍 8    🔁 0    💬 0    📌 0

Over the last 60 years, almost like clockwork, compute gets about 1,000x cheaper every decade, people buy 100,000x as much of it, and overall spend goes up about 100x.

Our mission at Groq is to drive the cost of compute towards zero: the cheaper we make compute, the more people spend.

🧵(4/5)

08.01.2025 16:03 — 👍 5    🔁 0    💬 1    📌 0
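As a quick sanity check, the three multipliers in the post above are consistent with each other; a tiny sketch using those illustrative numbers:

```python
# Illustrative numbers from the post above (per decade, roughly).
cost_reduction = 1_000          # compute gets ~1,000x cheaper
quantity_multiplier = 100_000   # people buy ~100,000x as much of it

# Total spend = quantity bought x unit cost, so the spend multiplier is:
spend_multiplier = quantity_multiplier / cost_reduction
print(spend_multiplier)  # 100.0 -> overall spend grows ~100x
```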

- The answer is that when you make a steam engine more efficient, it reduces the OpEx;

- When you reduce the OpEx, it increases the number of activities that are profitable;

- Therefore, people will do more things using steam engines and coal demand rises.

The same paradox applies to compute.

🧵(3/5)

08.01.2025 16:03 — 👍 2    🔁 0    💬 1    📌 0

It’s a paradox because if the engines are more efficient, why are people buying more coal?

🧵(2/5)

08.01.2025 16:03 — 👍 1    🔁 0    💬 1    📌 0
Post image

When you make compute cheaper, do people buy more?

Yes. It's called Jevons Paradox and it's a big part of our business thesis.

In the 1860s, an Englishman wrote a treatise on coal where he noted that every time steam engines got more efficient, people bought more coal.

🧵(1/5)

08.01.2025 16:03 — 👍 9    🔁 1    💬 1    📌 0
Post image

This is insane, Groq is the #4 API on this list! 😮

OpenAI, Anthropic, and Azure are the top 3 LLM API providers on LangChain

Groq is #4, close behind Azure

Google, Amazon, Mistral, and Hugging Face are the next 4.

Ollama is for local development.

Now add three more 747s' worth of LPUs 😁

07.01.2025 16:05 — 👍 17    🔁 2    💬 1    📌 0
2025 Predictions with bestie Gavin Baker
YouTube video by All-In Podcast

www.youtube.com/watch?v=HxNU...

05.01.2025 00:09 — 👍 1    🔁 0    💬 0    📌 0

Groq just got a shout out on the All-In pod as one of the big winners for 2025 alongside Nvidia. It’s the year of the AI chip and ours is the fastest 😃

05.01.2025 00:09 — 👍 5    🔁 0    💬 1    📌 0
Video thumbnail

Welcome to Shipmas - Groq Style.

Groq's second B747 this week. How many LPUs and GroqRacks can we load into a jumbo jet? Take a look.

Have you been naughty or nice?

24.12.2024 15:44 — 👍 12    🔁 1    💬 0    📌 0
Post image Post image Post image

Santa rented two full 747s this week to make his holiday deliveries of GroqRacks. Ho ho ho! 🎅

23.12.2024 17:47 — 👍 18    🔁 2    💬 2    📌 1

(5/5) Learning: product-led growth works even when your product is too large and expensive to give away for free; you just have to be more creative about it.

10.12.2024 15:45 — 👍 2    🔁 0    💬 0    📌 0

(4/5) We're not shipping anyone millions of dollars of hardware.
It’s not a big ask for them to try it.
And when they try it, they love it.

10.12.2024 15:45 — 👍 2    🔁 0    💬 1    📌 0

(3/5) By making Groq easy and low cost to try, we got 60,000 developers on our developer console in 30 days. Less than a year later, we're at 645,000 developers and growing.

10.12.2024 15:45 — 👍 1    🔁 0    💬 2    📌 0

(2/5) That makes it almost impossible to do counterintuitive things, like “Try this new chip called an LPU” when everything in the zeitgeist is about GPUs.

And if you're a startup? Forget it.

That realization is why we made the strategic decision to put up our own cloud.

10.12.2024 15:45 — 👍 0    🔁 0    💬 1    📌 0
Post image

(1/5) One of the reasons chips are so hard to innovate in is that if you're asking someone to put up a $10 million, $100 million, or $1 billion check, they need to know that what they're buying is going to work.

10.12.2024 15:45 — 👍 7    🔁 1    💬 1    📌 0

(5/5) The new Llama-3.3-70B model launched this morning and is now available to all 645,000 GroqCloud™ developers. Go cook, and don't forget to share what you build here.

Thank you for making GroqCloud™ the #1 API for fast inference! This is just the beginning.

06.12.2024 18:11 — 👍 4    🔁 1    💬 0    📌 0
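A minimal sketch of calling the new model from GroqCloud with the groq Python SDK; the llama-3.3-70b-versatile model id is an assumption to verify in the developer console.

```python
import os
from groq import Groq  # pip install groq

# Assumes GROQ_API_KEY is set; the model id below is the assumed
# GroqCloud identifier for Llama-3.3-70B - confirm it in the console.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[
        {"role": "user", "content": "In one sentence, what is an LPU?"},
    ],
)

print(completion.choices[0].message.content)
```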
