Thinking about two different bets on the future of AI interaction: structured vs unstructured. Code interpreter and Cursor vs Operator and Dia.
29.01.2025 16:45 · @cguo.bsky.social

Your codebase is no longer a moat.
Your product design is no longer a moat.

Your domain expertise is still a moat (for now).
Your network effects are still a moat.
Your data is still a moat.

The software landscape is going to change a lot in the next 3-5 years.
Eventually you'll stop counting the words. You'll stop worrying about what others think. You'll stop wondering if you're "a writer" yet.
You'll just write.
Get feedback, if it helps you. If not: fuck em. You're the one writing.
Write your first 2000 word post. Then your first 5000 word post. Write two posts in a week. Write every day.
Find something that inspires you. Something you have Opinions™ about. Read, if you can. It helps. But don't forget to write.
Maybe consider editing your posts. Or don't. Just keep writing. Keep hitting the publish button. Get to 1000 words a week.
Write. Just write. Then hit publish. Even if it's bad. Even if all you can do is 100 words. Start there.
Then do it again, the next week. Add 5 words the week after that. Get to 500 words a week. Make them 1% better. Then 1% better the week after that.
I hated English class in school (I was really bad at it, too).
In the past two years, I've written over 250,000 words on Substack. My biggest takeaway:
Wow. DeepSeek is number one in the app store today.
27.01.2025 00:48

Waiting for AGI like
25.01.2025 16:45

LangChain or LlamaIndex? CrewAI or Ell? Honestly, none of the above.
Right now, the best AI framework is no framework - there's too much risk of getting stuck in quicksand: www.ignorance.ai/p/ai-platfo...
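For what the "no framework" approach can look like in practice, here's a minimal sketch: a plain OpenAI SDK call wrapped in a thin helper, with no orchestration layer on top. The model name and the helper function are placeholders, not anything from the linked post.

```python
# Minimal "no framework" setup: call the model API directly and keep your own
# thin wrapper, instead of adopting a full orchestration library.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str, system: str = "You are a helpful assistant.") -> str:
    """One self-contained LLM call; swap in whatever model you're using."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("In one line, why might a thin wrapper beat a heavy framework?"))
```

If a framework earns its keep later, you can adopt it then; starting this way keeps the surface area small while the ecosystem is still shifting.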
ayyyy
23.01.2025 22:44

DeepSeek's new R1 model can gaslight itself into thinking there are only 2 r's in strawberry. We built a new form of intelligence and gave it all our existing neuroses.
23.01.2025 16:45

• If you're using documents or pasting large blocks of text, ask ChatGPT to cite specific quotes to justify its answers.
• And because it's always worth repeating: ChatGPT will lie to you. Don't ever blindly trust its output.
Give them a try, or share your best ChatGPT tips!
• If you don't have a good idea of how to prompt the AI, turn things around - have ChatGPT ask you relevant questions before answering.
• Don't be afraid to re-roll responses or go back and edit your message history. Branching conversations are an underused tool.
5 simple tips I would give anyone who wants to start being more effective with ChatGPT:
• Always give more context. It can't read your mind, and you have way more implicit assumptions than you think.
This entire article was a wild ride, even for someone who isn't shocked at the idea of human/AI relationships.
It's tempting to laugh, but instead I'm asking myself what it reveals about deeper human needs for connection and validation in an increasingly isolated modern world.
Reminder to self about keeping up with AI:
It's okay to not read every paper, like every viral post, try every tool, and test every model.
The world will still be there tomorrow.
As I get older, I'm less afraid of code that breaks, and more afraid of code that works - but I have no idea why.
19.01.2025 16:45

With o1 I now find myself reaching for custom browser JS and even Chrome extensions pretty frequently - something I rarely, if ever, did before. LLMs are now good enough to one-shot a custom Chrome extension.
18.01.2025 16:45

• Making connections with other brilliant content creators
• Exploring new models, mediums, and messages
• Consistently getting better at my craft
Everyone chases the first list. It's much more enjoyable chasing the second.
What I thought would make me happy:
• Getting to 10,000 (now almost 15,000) free subscribers
• Becoming a Bestseller with hundreds of paid subs
• Breaking into the top 100 on a leaderboard
What actually fills me with joy:
There's definitely still a risk of hallucinations, but Structured Outputs solves so many headaches when getting ChatGPT to generate JSON data.
Read more at www.ignorance.ai
Instead, try Structured Outputs:
- Define the exact schema you want
- Get guaranteed structure every time
- Force specific field names
- Validate data types automatically
You don't have to:
- Beg the AI to "PLEASE RETURN VALID JSON"
- Use brittle regexes to parse unstructured text
- Add custom delimiters and pray for the best
- Create complex multi-step prompts
If you're building with ChatGPT and not using Structured Outputs, you absolutely should.
Building reliable AI apps used to be a nightmare of JSON parsing. I've 100% been guilty of prompting: "Please pretty please give me JSON? 🥺"
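To make the Structured Outputs posts above concrete, here's a minimal sketch using the OpenAI Python SDK's structured-output support with a Pydantic schema. The schema, model name, and input text are illustrative placeholders, and the exact SDK surface may have shifted since these posts were written.

```python
# Sketch of Structured Outputs: declare the exact schema you want and let the
# API guarantee the response conforms to it, instead of parsing free-form text.
from openai import OpenAI
from pydantic import BaseModel


class ArticleSummary(BaseModel):  # illustrative schema
    title: str
    key_points: list[str]
    sentiment: str


client = OpenAI()

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # placeholder; use a model that supports structured outputs
    messages=[
        {"role": "system", "content": "Summarize the user's text."},
        {"role": "user", "content": "Structured Outputs removes a whole class of JSON-parsing bugs..."},
    ],
    response_format=ArticleSummary,
)

summary = completion.choices[0].message.parsed  # an ArticleSummary instance, not raw text
print(summary.title, summary.key_points, summary.sentiment)
```

No regexes, no delimiter hacks: field names and types come back exactly as declared in the schema, though, as noted above, the values themselves can still be hallucinated.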
It's still so early, but my main takeaway is that model efficiency is exploding while prices are plummeting.
2025 is going to be wild.
5. OpenAI, of course, held its own with the 12 days of Shipmas:
- Full Sora model release
- Full o1 model release (plus o1-pro)
- 1-800-CHATGPT (yes, really)
- o3 model benchmarks
4. Google went all out across text, videos, and reasoning:
- Gemini-exp-1206 topped leaderboards
- Gemini 2.0 Flash brought streaming magic
- "Thinking mode" followed in o1's footsteps
- Veo 2 crashed OpenAI's Sora party
3. DeepSeek came out of nowhere with v3
- Matches GPT-4o/Claude 3.5 Sonnet
- Costs 10x less to train and beats top benchmarks
- Comes with open weights
2. Meta unleashed Llama 3.3 70B
- Matches performance of 405B models
- Now runs on a laptop (wild!)
- Hitting 2,200 tokens/second on Cerebras