
Luke Marsden

@lmarsden.bsky.social

Hacker & entrepreneur. Founder helix.ml, private GenAI stack, getting business value out of local open source LLMs

2,456 Followers  |  4,518 Following  |  81 Posts  |  Joined: 17.04.2023

Latest posts by lmarsden.bsky.social on Bluesky

Bootstrapped Private GenAI Startup Hits $1M Annual Revenue, Launches Helix 2.0 The people behind the story, how agentic AI is changing and why we don't want a sales call with you

Bootstrapped a Private GenAI Startup to $1M revenue, AMA

blog.helix.ml/p/bootstrapp...

01.08.2025 17:46 — 👍 3   🔁 0   💬 0   📌 0

Thanks Hannah! 💖

03.04.2025 07:26 — 👍 4   🔁 0   💬 0   📌 0

Good morning #KubeCon! Come find me if you want to talk running LLM systems like vision RAG in production on Kubernetes today. Email luke@helix.ml or reply here!

03.04.2025 07:25 — 👍 6   🔁 0   💬 0   📌 0

Phil setting up a vLLM vision model provider on helix on his nerd phone on the Elizabeth line #kubecon

02.04.2025 13:53 — 👍 5   🔁 0   💬 0   📌 0

I mean, yes please 😄

02.04.2025 08:31 — 👍 1   🔁 0   💬 0   📌 0

Catch my talk with Priya Samuel on running LLMs in production on k8s today at #KubeCon at S10A at 3:15pm ✨

02.04.2025 07:56 — 👍 8   🔁 2   💬 0   📌 0

Trying (but failing) to be as colourful as @hannahfoxwell.net today!

02.04.2025 07:54 — 👍 1   🔁 0   💬 1   📌 1

Oh hello there BlueSky! We've arrived here just in time to share all the highlights of #KubeCon #CloudNativeCon in sunny London!

Don't miss @hannahfoxwell.net and @lmarsden.bsky.social at 15:20 today talking about Platform Engineering and Developer Experience For Your On Prem LLM

01.04.2025 12:41 — 👍 6   🔁 2   💬 0   📌 0
What Is an AI Native Developer? We explain what an AI native developer is, how the role will evolve, and why spec-driven AI development is driving this trend.

I set out to define a term that's being thrown around — #AI-native developer — with the help of @guypo.com @lmarsden.bsky.social and Thoughtworks' Mike Mason so you can prepare your software teams for the near future.
thenewstack.io/what-is-an-a...
What’s your definition? Only on @thenewstack.io

20.02.2025 17:47 — 👍 4   🔁 2   💬 1   📌 0

Based on my experience of riding the crazy horse that is cursor agent mode, spec-driven development is much needed!

20.02.2025 18:43 — 👍 1   🔁 0   💬 0   📌 0
You should run local models: run Deepseek-R1 on Helix 1.5.0 now Chinese Hedge Fund crashes US stock market by releasing Deepseek-R1, and what this means for your enterprise GenAI strategy

blog.helix.ml/p/you-should...

30.01.2025 10:02 — 👍 1   🔁 0   💬 0   📌 0

Deepseek are clever fuckers. I wrote this about how Deepseek is pushing decision makers in large financial institutions to seriously consider running their own models instead of calling out to Microsoft, Amazon & Google

Link below 👇

30.01.2025 10:01 — 👍 3   🔁 0   💬 3   📌 0

Speaker @lmarsden.bsky.social of MLOps Consulting will be in our SOOCon25 AI Openness track! 🎀 Join to hear about Production-Ready LLMs on Kubernetes: Patterns, Pitfalls, and Performance. 💡 https://buff.ly/3HjeQxq
#opensource #opensourceai #ai #stateofopencon #soocon25 #opensourcelondon #openuk

15.01.2025 17:00 — 👍 6   🔁 2   💬 0   📌 0
The cover of Ted Chiang's Exhalation. It shows a pair of lungs made up of plants and gears/cogs.

I enjoyed Stories of Your Life and Others so much that I'm moving straight to Chiang's other collection: Exhalation.

12.01.2025 07:59 — 👍 11   🔁 2   💬 3   📌 0

MS Teams in the UK this morning is the most grim experience imaginable: tons of jitter and the bandwidth seems fucked, like trying to do a video call over a highly contended LTE connection. My network connection is fine (per mtr). Is there an outage in some MS data center somewhere?

08.01.2025 11:34 — 👍 2   🔁 0   💬 0   📌 0
Release 1.4.12 - apps drag'n'drop filestore, initial MCP support · helixml/helix What's Changed make frontend work better with filestore knowledge by @nessie993 in #668 Model Context Protocol (MCP) support by @nessie993 in #680 security/refactoring/quality changes: Revive vo...

We shipped a lot over Christmas and I came here to release it:

github.com/helixml/heli...

- You can now drag'n'drop files directly into knowledge for Helix apps (rather than having to go via the filestore).

- Initial support for MCP (more on this coming soon!)

06.01.2025 16:19 — 👍 3   🔁 0   💬 0   📌 1

Cancelled my X subscription over all the stupid political interference

03.01.2025 20:50 — 👍 6   🔁 0   💬 1   📌 0
Running GenAI on Supercomputers: Bridging HPC and Modern AI Infrastructure Transform Your Supercomputer into a Private OpenAI: A Look at How Modern HPC Infrastructure Can Power Enterprise AI

Thank you to @dciangot.com for doing the heavy lifting getting HelixML GPU runners working on Slurm HPC infra, taking advantage of the hundreds of thousands of GPUs running on Slurm and transforming them into multi-tenant GenAI systems!

Read all about it here: blog.helix.ml/p/running-ge...

20.12.2024 11:00 — 👍 8   🔁 1   💬 0   📌 0

This is so awesome! Imagine combining the world of cloud-native, scalable web services for LLMOps with the raw power of Slurm-powered supercomputers and some of the biggest compute, networking and GPUs around! Check out the writeup here: blog.helix.ml/p/running-ge...

20.12.2024 10:58 — 👍 4   🔁 0   💬 0   📌 0
GPTScript Helix Apps: For Fun and Profit Making GPTScript Shine with Open Source LLMs: How Llama 3's 70B Model Finally Makes Natural Language Programming Reliable

So @cybernetist.com got gptscript working with llama3.3:70b! Check out the detailed writeup here: blog.helix.ml/p/gptscript-...

19.12.2024 17:25 — 👍 5   🔁 3   💬 0   📌 0
Private GenAI Platform – HelixML Build, test and deploy GenAI apps. Connect knowledge and business applications via APIs. Deploy in your cloud or use ours

Got helix.ml running GenAI on a supercomputer...

13.12.2024 19:04 — 👍 3   🔁 0   💬 0   📌 0

You're a dope llama hacker (ha ha try finding a gif for that)

11.12.2024 19:34 — 👍 0   🔁 0   💬 0   📌 0
Release 1.4.9 - support function calling in helix runners, add gptscript runner · helixml/helix What's Changed Add gptscript_runner service to helix stack by @milosgajdos in #635 Bump ollama and update LLM inference with Tool calling by @milosgajdos in #637 Full Changelog: 1.4.8...1.4.9

Fucking dope helix.ml release from @cybernetist.com if you like function calling in open source LLMs and @GPTScript_ai github.com/helixml/heli...

(Excuse my French)

11.12.2024 19:09 — 👍 1   🔁 0   💬 0   📌 1
Building Reliable GenAI Applications: A Hands-on Testing & CI Workshop Recap and a walkthrough video of the Testing & CI for GenAI Workshop we ran yesterday. Join the next one!

Here's the post with the YouTube video walkthrough of the workshop: blog.helix.ml/p/building-r...

04.12.2024 14:43 — 👍 1   🔁 0   💬 0   📌 0

Test Driven Development (TDD) for your LLMs? Yes please, more of that please!

Back to basics - write a test, see the test fail, improve the prompt, see it pass, check it in - just like you would with any other code 😄
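
To make that loop concrete, here's a minimal sketch of a prompt test with pytest against an OpenAI-compatible endpoint. The endpoint URL, model name, prompts and assertions are illustrative assumptions, not Helix's actual test format:

```python
# Hypothetical "TDD for prompts" sketch: call an OpenAI-compatible endpoint
# and assert on the behaviour you care about. Endpoint/model are assumptions.
import pytest
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

SYSTEM_PROMPT = "You are a support bot. Always answer in one short sentence."

@pytest.mark.parametrize("question, must_contain", [
    ("What file formats can I upload?", "pdf"),
    ("How do I reset my password?", "reset"),
])
def test_prompt_behaviour(question, must_contain):
    # Write the test first, watch it fail, then iterate on SYSTEM_PROMPT
    # until it passes, and check both in, like any other code.
    resp = client.chat.completions.create(
        model="llama3.1:8b-instruct",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    answer = resp.choices[0].message.content.lower()
    assert must_contain in answer
```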

04.12.2024 14:41 — 👍 3   🔁 1   💬 2   📌 2

Luke, this post, including the video, was very helpful. I loved the conference talk. I took my graduate students through a somewhat similar exercise last week for their last lab of the semester. Well done. Thank you.

28.11.2024 10:58 — 👍 2   🔁 1   💬 0   📌 0

This is a wonderful post by @lmarsden.bsky.social. Great presentation and video as well.

28.11.2024 11:00 — 👍 2   🔁 1   💬 0   📌 0
We can all be AI engineers – and we can do it with open source models The barriers to AI engineering are crumbling fast

ICYMI, here's my 2-part blog series on how we can all be AI engineers with open source models – and how to apply testing best practices to LLM apps

1. blog.helix.ml/p/we-can-all...
2. blog.helix.ml/p/from-click...

28.11.2024 09:49 — 👍 4   🔁 0   💬 1   📌 1

The reference architecture above uses option (a) for development, and the whole stack runs on kind on your laptop.

And you're right, without GPUs you lose fine-tuning and also image models. But you can still do knowledge, integrating the LLM with APIs via OpenAPI spec, and tests.
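
As a rough sketch of that OpenAPI-style integration, this is the generic tool-calling pattern rather than Helix's implementation; the endpoint, model and API operation are assumptions (in a Helix app the tool schema would come from your OpenAPI spec):

```python
# Generic sketch: describe an API operation as a tool, let the model decide
# to call it, execute the HTTP request, then feed the result back.
# Endpoint URL, model name and API are illustrative assumptions.
import json
import requests
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

# Hand-written here; normally derived from an OpenAPI spec.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of an order by id",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 1234?"}]
resp = client.chat.completions.create(
    model="llama3.1:8b-instruct", messages=messages, tools=tools
)

# Sketch assumes the model chose to call the tool.
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
status = requests.get(f"https://api.example.com/orders/{args['order_id']}").json()

messages += [resp.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": json.dumps(status)}]
final = client.chat.completions.create(model="llama3.1:8b-instruct", messages=messages)
print(final.choices[0].message.content)
```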

24.11.2024 08:18 — 👍 1   🔁 0   💬 0   📌 0

Yes, if you don't want to run your own GPUs, you can a) use an external LLM provider like togetherai (but then you lose the benefits of total data privacy within the cluster) or b) use ollama with CPU inference, but it's fairly slow unless you have lots of cores and use a smaller model
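
A minimal sketch of both options, with illustrative URLs and model names: ollama and providers like togetherai expose OpenAI-compatible chat endpoints, so switching between (a) and (b) is mostly a base-URL change.

```python
# Option (a) vs (b) for running without your own GPUs; URLs/models are
# illustrative assumptions.
from openai import OpenAI

# (a) external LLM provider, e.g. together.ai -- fast, but prompts leave the cluster
external = OpenAI(base_url="https://api.together.xyz/v1", api_key="<together-key>")

# (b) local ollama with CPU inference -- private, but slow without many cores
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = local.chat.completions.create(
    model="llama3.2:3b",  # a smaller model keeps CPU inference tolerable
    messages=[{"role": "user", "content": "Summarise this cluster's purpose."}],
)
print(resp.choices[0].message.content)
```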

24.11.2024 08:17 — 👍 1   🔁 0   💬 1   📌 0
