Why AI Makes Me Think More, Not Less
When implementation becomes trivial, thinking becomes everything
I keep hearing the same worry: AI is going to make developers dumber, because it means they don't have to think as much. And honestly? If that matched my experience, I'd be sounding the alarm too. I have no interest in tools that atrophy my skills.
But here's what's actually happening: I'm thinking more intensely than ever before. The speedbumps are gone, but the thinking? That's cranked up to 11.
What I have always loved about programming is deep thinking about the problem at hand and the best way to solve it. If AI took away the part where I think, I'd hate it. But it's doing the opposite, and because of that I've found a way to love this work even more than I did before.
The Value of Thinking Before Coding
Every great programmer I've known shares one trait: they think deeply before they code. Not sometimes. Every time they're solving a new problem, working in unfamiliar territory, or doing something tricky.
I've tried explaining this value for years, usually to skeptical audiences. At Jane, we introduced RFCs to help our teams scale. Some teams embraced the process, tailored it, and now ship incredibly fast. Others still ask: "Why can't I just start hacking?"
Here's what I noticed about the teams where it clicked: they turned software design into a collaborative activity. They work through problems together—business needs, existing constraints, maintainability concerns. This takes time, sure, hours spread over days. But by the time they start coding, they're flying.
One engineer told me: "After implementation planning, writing the code feels like a formality."
That quote stuck with me. Because here's the thing: When I use AI, every coding session feels like I'm on a high-performing team.
Building On What We Know
I keep coming back to this principle when developing AI workflows: how do we solve this problem today with humans? AI changes the game significantly, but it doesn't rewrite the rules. We've spent decades learning these patterns—don't throw that knowledge away, adapt it.
When a good engineer picks up a new ticket, they start with discovery. They pull out a notebook, jot down initial thoughts and questions after reading the ticket. They poke around the codebase, ask questions. Take more notes, hunt for answers. It's messy but systematic.
Once satisfied with discovery, they create an implementation plan. Maybe formal checkboxes, maybe bullet points, maybe paragraphs—the format doesn't matter as much as the thinking.
During implementation, sometimes everything goes to plan. Great! But when it doesn't, here's where great engineers separate themselves: they immediately revisit their planning artifacts. They page everything back into their minds and adapt quickly to what they missed. Any time lost to upfront planning is made up here many times over. When they have to roll with the punches, they do it quickly, easily, and well.
This is the pattern: Discovery → Planning → Implementation → Adaptation. And it's important to understand: this won't feel faster at the start; it will feel faster at the end. The gains compound, and quality rises along with speed.
Here's the thing: AI doesn't break this pattern—it amplifies every step. And the same compounding effect not only exists, it gets magnified.
AI Tools Change The Math
I've been advocating for planning for years. But with modern AI agents, planning gets easier and faster, while its compounding value goes up dramatically.
I've built and refined a set of prompts over months of practice. They're more sophisticated, they account for a lot more than what I'm sharing today, and I'm not ready to publish them quite yet. But through this blog, you'll learn the principles to build your own. My prompts are great, but just like an old greybeard vimmer with their .vimrc, your prompts should be tailored to you and precisely how you want to work.
Know When to Plan (And When Not To)
Before we dive in—not everything needs a formal plan. Fixing a typo? Just fix it. Tweaking button colors? Get it done. But when you're building features, refactoring systems, or touching anything that could break in interesting ways? I would never do those things without at least a simple plan up front.
The Opening Move
I always start the same way: "Here's what I'm trying to accomplish, here's how I want to work together. Sound good, or do you have suggestions?"
For planning, it goes something like:
"I'm building a plan for [feature]. I don't want you to write code. I want you to help me work through the plan—give suggestions, feedback, show what I'm missing, help validate ideas, and align with the project's goals and architecture. After I give context, we'll work through the implementation plan together. The final output will be a markdown file at current/PLAN.md with all details in a format an LLM agent can understand."
Setting the collaboration pattern upfront is crucial. AI doesn't know how you want to work; telling it explicitly is one of the most helpful things you can do. And if you're not yet used to collaborating with AI this way, it may even suggest useful refinements to the process.
The Power of PLAN.md
After planning, you have an artifact. Remember, AI is doing statistical pattern matching—clear, structured context is everything. Hierarchy matters. Examples matter even more. If you describe a pattern, show what exists now and what you want after.
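To make that concrete, here's a rough sketch of what a PLAN.md might look like. The feature, headings, and checklist items are all invented for illustration; the point is the hierarchy, not a prescribed template:

```markdown
# Plan: Add retry logic to the payments client

## Goal
One or two sentences on what we're building and why it matters.

## Constraints
- Architectural decisions that must be respected
- Things explicitly out of scope

## Current state → desired state
Show the pattern as it exists today (a real snippet from the codebase),
then the same snippet as it should look after the change. Examples beat
descriptions.

## Implementation steps
- [ ] Step 1, small enough to verify on its own
- [ ] Step 2, building on step 1

## Open questions
- Anything unresolved from discovery, so a guess never masquerades as a decision
```

The exact headings matter less than the structure: goals before constraints, examples before steps, and open questions made explicit.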
This artifact becomes your reference book. Working with AI, you build it faster and often better than working alone. Now you can add it to any session context and instantly get your agent up to speed. Bad path? Git reset, adjust the plan, try again (drastic, but sometimes a clean slate beats untangling a mess). Because once you've done the thinking, getting the code down really is just a formality.
There's More
Notice the `current` directory? There's a reason. PLAN.md isn't your only artifact, and honestly, good planning considers far more than we've covered today. But this is the start—and the plan is the heart of getting great results from these tools.
The Thinking Paradox
So here we are. The robots that were supposed to make us dumber are making me think harder than ever. The tools that were supposed to replace thinking have amplified it instead.
I spend more time planning now. It's not in a notebook that much anymore; it's in discussion with a highly engaged, intelligent, and tireless collaborative partner. More time considering architecture tradeoffs. More time weighing risk. More time asking "what if?" Because when implementation is cheap, thinking becomes the differentiator. When code generation is instant, planning becomes the bottleneck. When syntax is solved, problem-solving is all that's left.
And honestly? This is what I signed up for twenty years ago. Not to be a typist, but to be a thinker. Not to wrestle with syntax, but to solve problems. Not to fight my tools, but to build something meaningful.
AI didn't take away the part of programming I love. It gave me more of it.
This isn't everything. We haven't talked about what happens after the plan: how to break down work so AI can help you fly through implementation while maintaining your standards. Because thinking more doesn't mean typing forever.
The future isn't about writing less code. It's about solving harder problems.
And that is something I can live with.