Mike Ulin

@michaelulin.bsky.social

CTO @ Paxton AI

77 Followers  |  96 Following  |  4 Posts  |  Joined: 03.05.2023

Latest posts by michaelulin.bsky.social on Bluesky

Beyond Code Generation: Using Large Language Models Inside Your Functions
The real unlock isn’t just generating code—it’s replacing entire functions with AI-powered intelligence

Been experimenting a lot with AI coding tools (Lovable, Cursor, etc.) and had a bit of a realization. While code generation saves time, the more profound shift might be using LLMs as functional components within our code.

Plunging inference costs make it viable to replace chunks of complex, brittle…
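To make the pattern concrete, here's a minimal sketch of an LLM standing in for hand-written logic inside a function, assuming an OpenAI-style Python client; the task, function name, model, and prompt are illustrative, not from the post:

# Minimal sketch: delegate a function's logic to an LLM call instead of
# brittle keyword rules. Assumes the openai>=1.0 client and an
# OPENAI_API_KEY set in the environment. Task and labels are illustrative.
from openai import OpenAI

client = OpenAI()

def classify_support_ticket(ticket_text: str) -> str:
    """Return one of: billing, bug, feature_request, other."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any cheap chat model works for this sketch
        messages=[
            {"role": "system",
             "content": "Classify the support ticket as one of: billing, "
                        "bug, feature_request, other. Reply with the label only."},
            {"role": "user", "content": ticket_text},
        ],
        temperature=0,  # deterministic output for a function-like component
    )
    return response.choices[0].message.content.strip()

The point is that the call site looks like any other function — classify_support_ticket("I was charged twice") — while the "implementation" is a prompt rather than a pile of conditionals.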

21.04.2025 14:00 — 👍 0    🔁 0    💬 0    📌 0
The Moby Dick Theory of Big Companies
Examining Marc Andreessen's classic essay on when startups need to interact with a larger firm

Selling to large enterprises can feel like chasing Moby Dick—uncertain, risky, and potentially game-changing.

I've navigated enterprise sales at RPX, ZestyAI, and Paxton, and each experience reinforced Marc Andreessen's brilliant insight from his essay, "The Moby Dick Theory of Big Companies"…

14.04.2025 14:00 — 👍 1    🔁 0    💬 0    📌 0
There’s More to DeepSeek Than You Think
Is DeepSeek Actually a Technological Breakthrough—or Just on the Standard Cost Curve?

Just published a new blog post on DeepSeek—the buzzy open-source AI project that’s been making waves (and headlines) lately. But is it really the groundbreaking innovation everyone says it is?

pioneeringthoughts.substack.com/p/theres-mor...

03.02.2025 14:32 — 👍 1    🔁 1    💬 0    📌 0
Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs
Introducing MPT-7B, the latest entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch. For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!

Open source LLMs are getting really interesting: Mosaic just released models with a 65k context window :) https://www.mosaicml.com/blog/mpt-7b
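For anyone who wants to try it, here's a minimal sketch of loading the base model with Hugging Face transformers; the model id is the one MosaicML published, while the prompt and generation settings are illustrative:

# Sketch: load the base MPT-7B checkpoint from the Hugging Face Hub.
# MPT uses a custom architecture, so trust_remote_code is required.
# Assumes enough RAM/VRAM for a 7B-parameter model.
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    trust_remote_code=True,
)
# Per MosaicML's release notes, MPT-7B uses the EleutherAI GPT-NeoX-20B tokenizer.
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("Open-source LLMs are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=25)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The 65k-context variant is the separate MPT-7B-StoryWriter-65k+ checkpoint mentioned in the announcement; the base model above uses a standard 2k context.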

07.05.2023 14:09 — 👍 3    🔁 0    💬 0    📌 0
