The top 3 requests?
🔧 "error" (35%)
✅ "test" (21%)
✨ "improve" (18%)
AI isn't starting from a blank file.
It's jumping into messy code and making sense of it.
The real value? Debugging, refining, and unblocking.
Most people think devs use AI to write code from scratch.
But we analyzed 81 million developer chats, and that's not what's happening.
Here's what we found 👇
Agent prompting = engineering communication.
Don't think "prompt engineering." Think "design doc + task breakdown + pair programming."
Good prompts are good collaboration.
Ask for a plan before action.
"I need to expose time zone settings. First, suggest a plan; don't write code yet."
This gives you control. And gives the Agent a checkpoint to align.
Don't cram it all in at once.
❌ "Read ticket, build UI, write tests, update docs"
✅
Break into steps:
- Read ticket
- Build UI
- Write tests
- Update docs
Let the Agent finish before moving on.
Point the Agent to the right files.
❌ "Add JSON parser to chat backend"
✅
"Add JSON parser in LLMOutputParsing (services/ folder). It'll be used to extract structured output from chat completions."
Precision = performance.
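To make the payoff concrete, here is a minimal sketch of what a grounded implementation of that prompt might produce. Only the LLMOutputParsing name and the services/ location come from the prompt above; the method name, the regex fallback, and the usage example are illustrative assumptions, not Augment's actual code.

    # Hypothetical services/llm_output_parsing.py sketch; everything beyond the
    # class name LLMOutputParsing is an assumption for illustration only.
    import json
    import re


    class LLMOutputParsing:
        """Extracts a structured JSON object from raw chat-completion text."""

        JSON_BLOCK = re.compile(r"\{.*\}", re.DOTALL)

        def parse(self, completion: str) -> dict:
            # Try the whole string first, then fall back to the first {...} span,
            # since models often wrap JSON in prose or code fences.
            try:
                return json.loads(completion)
            except json.JSONDecodeError:
                match = self.JSON_BLOCK.search(completion)
                if match is None:
                    raise ValueError("No JSON object found in completion")
                return json.loads(match.group(0))


    if __name__ == "__main__":
        parser = LLMOutputParsing()
        print(parser.parse('Here you go:\n{"status": "ok", "items": 3}'))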
Give references to code, tests, or docs.
❌ "Write tests for ImageProcessor"
✅
"Write tests for ImageProcessor. Follow structure in test_text_processor.py"
The Agent learns better by example.
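For illustration, the kind of test file the Agent might write when pointed at an existing pattern could look like the sketch below. ImageProcessor and test_text_processor.py come from the prompt; the module path, fixture, and assertions are assumptions about a hypothetical codebase.

    # Hypothetical test_image_processor.py mirroring an existing test layout.
    import pytest

    from services.image_processor import ImageProcessor  # assumed module path


    @pytest.fixture
    def processor():
        return ImageProcessor()


    def test_process_returns_a_result(processor):
        # Arrange/act/assert layout copied from test_text_processor.py.
        result = processor.process(b"\x89PNG...")  # truncated sample bytes
        assert result is not None


    def test_process_rejects_empty_input(processor):
        with pytest.raises(ValueError):
            processor.process(b"")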
Include why, not just what.
❌ "Use events instead of direct method calls"
✅
"Reviewers flagged tight coupling in SettingsWebviewPanel.statusUpdate(). Let's refactor to events to improve modularity."
Reasoning aligns the Agent with your intent.
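As a rough before/after of the refactor that prompt is asking for (shown in Python to match the other sketches; only SettingsWebviewPanel.statusUpdate() comes from the prompt, while the EventBus and event name are assumptions):

    # Minimal publish/subscribe sketch replacing a direct cross-class method call.
    from collections import defaultdict
    from typing import Callable


    class EventBus:
        def __init__(self):
            self._handlers: dict[str, list[Callable]] = defaultdict(list)

        def subscribe(self, event: str, handler: Callable) -> None:
            self._handlers[event].append(handler)

        def publish(self, event: str, payload: dict) -> None:
            for handler in self._handlers[event]:
                handler(payload)


    class SettingsWebviewPanel:
        def __init__(self, bus: EventBus):
            # Before: callers invoked the panel's status-update method directly
            # (tight coupling). After: the panel subscribes itself, and callers
            # only need to know about the bus.
            bus.subscribe("status_update", self.status_update)

        def status_update(self, payload: dict) -> None:
            print(f"panel received: {payload}")


    bus = EventBus()
    panel = SettingsWebviewPanel(bus)
    bus.publish("status_update", {"connected": True})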
❌ "Fix bug in login handler"
✅
"Login fails with 500 on incorrect passwords. Repro: call /api/auth with wrong creds. Check auth_service.py. Add test if possible."
Agents need context like humans do.
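The regression test that prompt asks for could be sketched roughly as follows. The /api/auth route and auth_service.py come from the prompt; the application factory, test client, and expected 401 status are assumptions about the app under test.

    # Hypothetical regression test for the 500-on-wrong-password bug.
    import pytest

    from app import create_app  # assumed application factory


    @pytest.fixture
    def client():
        app = create_app(testing=True)  # assumed test-configuration hook
        return app.test_client()


    def test_wrong_password_returns_401_not_500(client):
        # Repro from the bug report: wrong credentials currently trigger a 500.
        response = client.post("/api/auth", json={"user": "alice", "password": "wrong"})
        assert response.status_code == 401  # expected once auth_service.py is fixed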
Most Agent failures aren't about bad models.
They're about bad prompts.
Here's how to write prompts that actually work, based on thousands of real dev-Agent interactions 👇🧵
Thanks for the feedback, Michael - agree we were overzealous here. We'll go ahead and delete all your info so you don't hear from us again.
Ready to make your agent work the way you do? Create a .augment/rules/ folder in your repository and start customizing.
www.augmentcode.com/changelog/in...
Already using .augment-guidelines.md?
No changes required: your setup remains supported.
But Augment Rules offers even greater flexibility and control.
Three flexible ways to use Rules:
1️⃣ Always: Attach rules to every query automatically
2️⃣ Manual: Select rules per query as needed
3️⃣ Auto: Describe your task and the agent intelligently selects the most relevant rules
Get started in seconds:
🧠 Smart Rule Selection: Agent Requested mode finds what's relevant for each task
🔄 Seamless Migration: Import rules from other tools, or use your existing Augment guidelines
🧩 Flexible Organization: Use any file name or structure to match your workflow
Every project, team, and workflow is unique.
Augment Rules empower you to specify exactly how your agent should behave. Simply add instruction files to .augment/rules/ and your agent will adapt.
With Augment Rules, your software agent can build just like your team does.
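For illustration only, a rules folder might be laid out something like the sketch below. The .augment/rules/ folder name comes from the posts above; the file names and rule text are hypothetical, and the exact file format is documented in the changelog linked above.

    .augment/rules/
        testing.md   - "Every new Python module ships with pytest coverage; mirror the layout of existing test files."
        style.md     - "Prefer event-based communication over direct cross-module method calls."
        reviews.md   - "Reference the relevant ticket in commit messages and PR descriptions."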
Stay in-session. Build context.
Correct it like you would a teammate:
"This is close; just fix the null case."
"Leave the rest as-is."
You'll be surprised how far a few nudges go.
Failure is feedback.
It tells you:
✅ What the Agent misunderstood
✅ What you didn't explain
✅ What to clarify next
Don't bail; refine.
Let it write and run tests.
Then iterate:
"Tests failed: what went wrong?"
"Fix the off-by-one error in test 3."
"Rerun and confirm."
Quick cycles beat careful guesses every time.
The best Agent workflows look like test-driven development:
Write → run → fix → rerun.
You're not aiming for a perfect prompt; you're building momentum.
Prompt.
Wait.
It messes up.
Start over?
Not if you build a feedback loop.
Here's how to make Agents actually useful 👇
PS: That's why we built Prompt Enhancer: it auto-pulls context from your codebase and rewrites your prompt for clarity.
Available now in VS Code & JetBrains: just click ✨ and ship better prompts, faster.
TL;DR:
🛠️ Tools help
📁 File paths help
🧠 Rationale helps
📝 Examples help
Agents don't need perfect prompts.
They need complete ones.
Examples work too, but only if they're scoped:
❌ "Implement tests for ImageProcessor."
✅
"Implement tests for ImageProcessor. Follow the pattern in text_processor.py."
Now it knows what "good" looks like.
A high-context prompt answers the silent questions:
- Where should I look?
- What else is relevant?
- Are there examples I can follow?
- What's the user really trying to do?
Agents don't ask.
So you have to pre-answer.
Here's what a low-context prompt looks like: "Enable JSON parser for chat backend."
Sounds fine, right?
Until the Agent:
- Picks the wrong file
- Misses the LLMOutputParsing class
- Uses a config that doesn't exist
Because you didn't ground it.
Prompting isn't about being clever.
It's about giving the Agent everything it needs to not guess.
That means:
- Clear goals
- Relevant files
- Helpful examples
- Precise constraints
Agents don't hallucinate randomly; they hallucinate from missing context.
Most Agent failures aren't model problems. They're context problems.
If you give vague or incomplete info, the Agent will fill in the blanks, and usually get them wrong.
Here's how to write high-context prompts that actually work 👇
Guess now we can predict Asia's lunch time.