The Slow Drift Nobody Talks About
If you've been using AI coding tools for a while, you probably started with good intentions. Carefully worded prompts. Clear context. Scoped instructions.
And then, somewhere around week three, you started typing one-liners.
I know, because I did the same thing. Over the past year and a half of building with GitHub Copilot, Cursor, and Claude Code, I've watched my prompting discipline erode in real time — and I've paid for it in confused AI outputs, unwanted edits, and debugging sessions that should never have happened.
The problem isn't that you're lazy. The problem is that the habit of writing good prompts has too much friction. So when you're tired, or in flow, or just trying to get something done fast — you shortcut it. Every time.
What I've learned is that the solution isn't discipline. It's designing habits that are low enough friction that you actually do them.
Here are the two that made the biggest difference for me.
Habit 1: Make the AI Ask Questions First
When I'm vague — which is often — I add a single sentence to the end of my prompt:
"Ask me any clarifying questions before you start."
That's it. Nothing magical. But what it does is force a short alignment checkpoint before any code gets written. Instead of the AI charging ahead and making five assumptions I didn't want, it surfaces those assumptions as questions first.
The result? I spend 30 seconds answering questions upfront instead of 20 minutes undoing confident mistakes later.
Most modern AI tools support this naturally. Cursor, Claude, Copilot Chat — they'll all respond with clarifying questions if you ask. You don't need a special mode or workflow. Just add the line.
When to use it: Any time your prompt is a single sentence, or you're working on something that touches multiple files, involves architecture decisions, or could go in several valid directions.
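If you send prompts through scripts or small CLI wrappers, the habit can even be automated so you never forget the line. A minimal sketch in Python — the helper name and structure are mine for illustration, not part of any tool's API:

```python
# Appends the alignment-checkpoint sentence to any prompt before
# it's sent to an AI tool. Purely illustrative; adapt to whatever
# wrapper or script you already use.

CLARIFY_NUDGE = "Ask me any clarifying questions before you start."

def with_clarifying_nudge(prompt: str) -> str:
    """Return the prompt with the clarifying-question nudge appended."""
    return f"{prompt.rstrip()}\n\n{CLARIFY_NUDGE}"

# Example: a vague one-liner becomes a prompt that triggers
# questions instead of assumptions.
print(with_clarifying_nudge("Add pagination to the /users endpoint"))
```

The point isn't the code — it's that the nudge costs one string concatenation, which is exactly the level of friction a habit can survive.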
Habit 2: Always Get a Plan Before Code
This one has probably saved me more tokens and more time than anything else.
Before writing a single line of code, I ask the AI for a detailed implementation plan — usually as a markdown document. I want it to walk me through:
- What files will be created or modified
- The overall approach and key decisions
- Any edge cases or tradeoffs worth thinking about
- The order of operations
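The four points above fit naturally into a reusable template you can paste or script. A minimal sketch in Python — the wording and function name are my own illustration, not a built-in feature of any tool:

```python
# A reusable plan-first prompt template covering the four points
# from the list above. Illustrative only; tweak the wording to
# match your own workflow.

PLAN_TEMPLATE = """Before writing any code, give me a detailed \
implementation plan as a markdown document covering:

- What files will be created or modified
- The overall approach and key decisions
- Any edge cases or tradeoffs worth thinking about
- The order of operations

Task: {task}"""

def plan_first(task: str) -> str:
    """Wrap a task description in the plan-before-code template."""
    return PLAN_TEMPLATE.format(task=task)

print(plan_first("Add response caching to the user service"))
```

Keeping the template in one place also means you refine it once and every future prompt benefits.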
Reading a plan takes two minutes. Flagging corrections takes another two. But going back and forth trying to fix code that went in the wrong direction? That can eat an hour.
The plan also gives you something to react to. It's much easier to say "actually, skip step 3, we already have that" than to catch a wrong assumption buried inside 80 lines of generated code.
Cursor users: Agent mode has built-in planning support. Use it every time. Don't skip it just because you're in a hurry.
Why Most Prompting Advice Doesn't Stick
You've probably read the articles. Be specific. Add context. Give examples. Use chain-of-thought. All true. All useful. None of it survives contact with a real deadline.
The issue is that most prompting advice optimizes for the perfect prompt — as if you have unlimited time and mental bandwidth every time you open a chat window. You don't.
What actually works is reducing the cost of the right habit. One sentence to trigger questions. One request for a plan before code. These aren't perfect, but they're consistent — and consistency beats perfection every time.
The Uncomfortable Truth
There is no magic prompt formula. The developers I've seen use AI tools most effectively aren't the ones with the fanciest system prompts or elaborate workflows. They're the ones who've built a small set of reliable defaults they actually use.
Start with these two. Adapt them to your workflow. And resist the urge to over-engineer it — the simpler the habit, the more likely it is to stick.
What's your go-to habit when prompting AI tools? I'd love to hear what's actually worked for you.