5/5: Full analysis: www.cyberchitta.cc/articles/os-...
Try vibe-gain: github.com/cyberchitta/...
Your AI coding boost? Reply below!
4/5: How? More sessions/day (1-2→3-4), faster iteration (gaps <5min: 38%→45%). Complex projects became doable.
All via chat AI (Claude/Grok + llm-context)—no IDE tools like Cursor.
#DevTools
[Chart: Pre-AI commit activity timeline]
[Chart: Recent AI-era commit activity timeline]
3/5: Data: Active days +46% (99→145), commits/day doubled (3→6). Deeper work: repos with 100+ commits up (1→4).
See the shift:
#AIProductivity
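If you want rough versions of these numbers (active days, commits/day, gaps under 5 minutes, sessions/day) for your own repos, a minimal Python sketch along these lines works. This is not the vibe-gain implementation, and the 30-minute session cutoff is just an assumption for illustration:

```python
# Illustrative only, not vibe-gain's actual code: pull commit timestamps
# from a repo and compute active days, commits per active day, the share
# of commit gaps under 5 minutes, and sessions per active day
# (session = commits separated by < 30 min, an assumed cutoff).
# Assumes git is on PATH and the repo has at least one commit.
import subprocess
from datetime import datetime, timedelta

def commit_times(repo_path="."):
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%aI"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return sorted(datetime.fromisoformat(ts) for ts in out)

def activity_stats(times, session_break=timedelta(minutes=30)):
    gaps = [b - a for a, b in zip(times, times[1:])]
    active_days = {t.date() for t in times}
    # A new session starts whenever the gap to the previous commit
    # exceeds the session-break threshold.
    sessions = 1 + sum(g > session_break for g in gaps)
    return {
        "active_days": len(active_days),
        "commits_per_active_day": len(times) / len(active_days),
        "pct_gaps_under_5min": 100 * sum(g < timedelta(minutes=5) for g in gaps) / max(len(gaps), 1),
        "sessions_per_active_day": sessions / len(active_days),
    }

if __name__ == "__main__":
    print(activity_stats(commit_times(".")))
```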
2/5: Pre-AI: Bursty commits, big gaps. With AI: Steady output, 2-2.5x median LOC/day (225→532).
AI made any time slot productive—brief or long, switching tasks felt instant.
#AIDev #OpenSource
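The LOC/day figure can be approximated the same way. A rough sketch (again, not the actual vibe-gain logic) using git's --numstat output, counting added plus deleted lines per commit day and skipping binary files:

```python
# Rough sketch: estimate median lines of code touched per active day
# from `git log --numstat`. Binary files show "-" counts and are skipped.
import subprocess
from collections import defaultdict
from statistics import median

def loc_per_day(repo_path="."):
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--pretty=@%ad", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout
    daily = defaultdict(int)
    day = None
    for line in out.splitlines():
        if line.startswith("@"):       # commit header line: "@YYYY-MM-DD"
            day = line[1:]
        elif line.strip():             # numstat line: "added<TAB>deleted<TAB>path"
            added, deleted, _path = line.split("\t", 2)
            if added != "-":           # skip binary files
                daily[day] += int(added) + int(deleted)
    return daily

if __name__ == "__main__":
    daily = loc_per_day(".")
    print("median LOC/day:", median(daily.values()))
```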
🧵 1/5: 🚀 New post: "Vibe Gain: How AI Unlocked Hidden Coding Time"
GitHub data shows my coding shift from sporadic bursts to daily flow—2-2.5x gains via AI. No extra hours, just frictionless sessions.
Thread: Metrics, charts, insights 👇
#AI #Coding #Productivity
I added evals for gemini-2.5-pro-exp-03-25 and deepseek-chat-v3-0324. TL;DR: gemini-2.5 one-shotted it; deepseek-v3 two-shotted it.
Bottom line: I think my eval has been saturated now. I likely will not test any more models on it.
So I built scrapling-fetch-mcp! A simple URL fetcher that helps LLMs access websites protected by bot-detection.
GitHub: github.com/cyberchitta/...
Felt weird that Claude couldn't fetch a URL (using mcp-fetch) that I had open in a browser window right next to it. Even weirder that I found an MCP tool for this purpose, but needed to get an API key to use it.
14.03.2025 12:30
I just added evals for grok-3-beta and grok-3-beta-think. grok-3-beta-think hit the top tier, grok-3-beta not so much.
21.02.2025 07:37
Added evals for o3-mini and o3-mini-high
01.02.2025 10:33
Added evaluation for gemini-2.0-flash-thinking-exp-01-21
22.01.2025 16:53
Added evaluation for deepseek-r1
21.01.2025 14:28
Added evaluation for minimax-text-01
16.01.2025 08:37
Added evaluation for deepseek-v3
28.12.2024 06:33
Added evaluation for gemini-exp-1206
12.12.2024 14:23
Added evaluation for gemini-2.0-flash-exp
11.12.2024 18:33
📝 28 Alternatives to LLM Context
We found 28 different tools for sharing code with LLMs - and built one more! Check out our roundup of open-source alternatives to LLM Context, and see how developers everywhere are tackling this challenge: www.cyberchitta.cc/articles/lc-...
Added evaluation for QwQ-32b-preview
08.12.2024 06:42
Added evaluations for llama-3.3-70b-instruct and nova-pro-1.0
07.12.2024 16:10
Added evaluation for o1
07.12.2024 14:03
📢 llm-context v0.2.0 is out!
✨ Major feature: Full Claude Desktop integration via MCP
- Seamlessly explore codebases right in your conversations
- Profile-based content filtering
- Intelligent context selection
🔗 github.com/cyberchitta/llm-context.py
📦 uv tool install llm-context
🚀 Introducing LLM Context - a tool to seamlessly integrate AI chat into your coding workflow. Discover how it enhances development through efficient context management and workflow integration. Check it out:
🔗 www.cyberchitta.cc/articles/llm...
Added evaluations for #qwen-2.5-coder-32b-instruct and #gemini-exp-1121
27.11.2024 06:59
I just added evaluations for #qwen-2.5-72b-instruct and #nemotron-70b-instruct
25.11.2024 09:48
Smaller LLMs beat GPT-4 at Shopify code refactoring! 🤯
Claude-Haiku & Mistral outperformed industry giants in our 14-model comparison.
Full analysis: cyberchitta.cc/articles/llm...
#deepseek-r1-lite-preview added