Summarizer

Not Novel or Revolutionary

Many commenters argue this workflow is standard practice, not radically different. They point to existing tools like Kiro, OpenSpec, Spec Kit, and Antigravity that already implement spec-driven development, and claim the approach was documented in Cursor forums more than two years ago.

While many commenters appreciate the effort to document a structured AI workflow, the overwhelming consensus is that a "plan-first" approach is a standard engineering practice rather than a revolutionary discovery. Critics point out that this methodology—treating AI as an "energetic but unreliable intern" that requires strict oversight—is already a first-class feature in tools like Claude Code and Google’s Antigravity, and has been discussed in developer forums for years. Despite the debate over its novelty, supporters argue that the transition from "vibe coding" to rigorous, spec-driven development is a necessary "coming of age" for the industry. Ultimately, the community views this workflow as a return to fundamental software engineering principles where the hard work remains in the planning, not the typing.

46 comments tagged with this topic

View on HN · Topics
The author seems to think they've hit upon something revolutionary... They've actually hit upon something that several of us have evolved to naturally. LLMs are like unreliable interns with boundless energy. They make silly mistakes, wander into annoying structural traps, and have to be unwound if left to their own devices. It's like the genie that almost pathologically misinterprets your wishes. So, how do you solve that? Exactly how an experienced lead or software manager does: you have them write things down before executing, explain things back to you, and ground all of their thinking in the code and documentation, avoiding assumptions made after only superficial review. When it was early ChatGPT, this meant function-level thinking and clearly described jobs. When it was Cline, it meant clinerules files that forced writing architecture.md files and vibe-code.log histories, demanding grounding in research and code reading. Maybe nine months ago, another engineer said two things to me, less than a day apart:

- "I don't understand why your clinerules file is so large. You have the LLM jumping through so many hoops and doing so much extra work. It's crazy."
- The next morning: "It's basically like a lottery. I can't get the LLM to generate what I want reliably. I just have to settle for whatever it comes up with and then try again."

These systems have to deal with minimal context, ambiguous guidance, and extreme isolation. Operate with a little empathy for the energetic interns, and they'll uncork levels of output worth fighting for. We're Software Managers now. For some of us, that's working out great.
View on HN · Topics
Revolutionary or not it was very nice of the author to make time and effort to share their workflow. For those starting out using Claude Code it gives a structured way to get things done bypassing the time/energy needed to “hit upon something that several of us have evolved to naturally”.
View on HN · Topics
It's this line that I'm bristling at: "...the workflow I’ve settled into is radically different from what most people do with AI coding tools..." Anyone who spends some time with these tools (and doesn't black out from smashing their head against their desk) is going to find substantial benefit in planning with clarity. It was #6 in Boris's run-down: https://news.ycombinator.com/item?id=46470017 So, yes, I'm glad that people write things out and share. But I'd prefer that they not lead with "hey folks, I have news: we should *slice* our bread!"
View on HN · Topics
I would say he’s saying “hey folks, I have news. We should slice our bread with a knife rather than the spoon that came with the bread.”
View on HN · Topics
These kinds of flows have been documented in the wild for some time now. They started popping up in the Cursor forums 2+ years ago, e.g.: https://github.com/johnpeterman72/CursorRIPER Personally, I have been using a similar flow for almost 3 years now, tailored to my needs. Everybody who uses AI for coding eventually gravitates toward a similar pattern because it works quite well (for all IDEs, CLIs, and TUIs).
View on HN · Topics
> I don’t think it’s that big a red flag anymore. It is to me, because it indicates the author didn't care about the topic. The only thing they cared about was writing an "insightful" article about using LLMs. Hence this whole thing is basically LinkedIn resume-improvement slop. Not worth interacting with, imo. Also, it's not insightful whatsoever. It's basically a retelling of other articles from around the time Claude Code was released to the public (March-August 2025).
View on HN · Topics
I've been doing the exact same thing for 2 months now. I wish I had gotten off my ass and written a blog post about it. I can't blame the author for gathering all the well deserved clout they are getting for it now.
View on HN · Topics
I went through the blog. I started using Claude Code about 2 weeks ago and my approach is practically the same. It just felt logical. I think there are a bunch of us who have landed on this approach and most are just quietly seeing the benefits.
View on HN · Topics
Don’t worry. This advice has been going around for much more than 2 months, including links posted here as well as official advice from the major companies (OpenAI and Anthropic) themselves. The tools have literally had plan mode as a first-class feature. So you probably wouldn't have gotten any clout anyway, like all of the other blog posts.
View on HN · Topics
> the workflow I’ve settled into is radically different from what most people do with AI coding tools This looks exactly like what anthropic recommends as the best practice for using Claude Code. Textbook. It also exposes a major downside of this approach: if you don't plan perfectly, you'll have to start over from scratch if anything goes wrong. I've found a much better approach in doing a design -> plan -> execute in batches, where the plan is no more than 1,500 lines, used as a proxy for complexity. My 30,000 LOC app has about 100,000 lines of plan behind it. Can't build something that big as a one-shot.
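The batching idea above is easy to operationalize; a minimal sketch, assuming the current plan lives in a `plan.md` file and using the commenter's ~1,500-line budget as the complexity proxy (the threshold and filename are assumptions, not tool defaults):

```shell
# Line count as a rough complexity proxy, per the comment above.
PLAN=plan.md
LIMIT=1500
if [ "$(wc -l < "$PLAN")" -gt "$LIMIT" ]; then
  echo "plan exceeds $LIMIT lines: split it into batches before executing"
fi
```

Anything over the budget gets its own design -> plan -> execute batch rather than being run as one shot.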
View on HN · Topics
We're learning how to be an engineer all over again. The author's process is super-close to what we were taught in engineering 101, 40 years ago.
View on HN · Topics
It's after we come down from the Vibe coding high that we realize we still need to ship working, high-quality code. The lessons are the same, but our muscle memory has to be re-oriented. How do we create estimates when AI is involved? In what ways do we redefine the information flow between Product and Engineering?
View on HN · Topics
I always feel like I'm in a fever dream when I hear about AI workflows. A lot of it is what I've read in software engineering books and articles.
View on HN · Topics
Exactly; the original commenter seems determined to write off AI as "just not as good as me". The original article is, to me, seemingly not that novel. Not because it's a trite example, but because I've begun to experience massive gains from following the same basic premise as the article. And I can't believe there are others who aren't using it like this. I iterate the plan until it's seemingly deterministic, then I strip the plan of implementation and rewrite it following a TDD approach. Then I read all the specs and generate all the code to red->green the tests. If this commenter is too good for that, then it's that attitude that'll keep him stuck. I already feel like my project backlog is achievable, this year.
View on HN · Topics
Radically different? Sounds to me like the standard spec-driven approach that plenty of people use. I prefer an iterative approach. LLMs give you incredible speed to try different approaches and inform your decisions. I don’t think you can ever have a perfect spec upfront, at least in my experience.
View on HN · Topics
Well, that's already done by Amazon's Kiro [0], Google's Antigravity [1], GitHub's Spec Kit [2], and OpenSpec [3]! [0]: https://kiro.dev/ [1]: https://antigravity.google/ [2]: https://github.github.com/spec-kit/ [3]: https://openspec.dev/
View on HN · Topics
> Read deeply, write a plan, annotate the plan until it’s right, then let Claude execute the whole thing without stopping, checking types along the way.

As others have already noted, this workflow is exactly what the Google Antigravity agent (based on Visual Studio Code) was created for. Antigravity even includes specialized UI for a user to annotate selected portions of an LLM-generated plan before iterating on it. One significant downside to Antigravity I have found so far: even though it will properly infer a certain technical requirement and clearly note it in the plan it generates (for example, "this business reporting column needs to use a weighted average"), it will sometimes quietly downgrade that specialized requirement (for example, to a non-weighted average) without even creating an appropriate "WARNING:" comment in the generated code. Especially so when the relevant codebase already includes a similar, but not exactly appropriate, API. My repeated prompts to ALWAYS ask about ANY implementation ambiguities WHATSOEVER go unheeded. From what I gather, Claude Code seems better than other agents at remembering to query the user about implementation ambiguities, so maybe I will give Claude Code a shot over Antigravity.
View on HN · Topics
This is the way. The practice is:

- simple
- effective
- retains control and quality

Certainly the “unsupervised agent” workflows are getting a lot of attention right now, but they require a specific set of circumstances to be effective:

- a clear validation loop (e.g. compile the kernel; here is the gcc that does so correctly)
- AI-enabled tooling (an MCP/CLI tool that will lint, test, and provide feedback immediately)
- oversight to prevent agents going off the rails (an open area of research)
- an unlimited token budget

That means most people can't use unsupervised agents. Not that they don't work; most people simply don't have an environment and task that is appropriate. By comparison, anyone with Cursor or Claude can immediately start using this approach, or their own variant of it. It does not require fancy tooling. It does not require an arcane agent framework. It works generally well across models. This is one of those few genuine pieces of good practical advice for people getting into AI coding. Simple. Obviously works once you start using it. No external dependencies. BYO tools to help with it; no “buy my AI startup xxx to help”. No “star my GitHub so I can get a job at $AI corp too”. Great stuff.
View on HN · Topics
There are frameworks like https://github.com/bmad-code-org/BMAD-METHOD and https://github.github.com/spec-kit/ that are working on encoding a similar kind of approach and process.
View on HN · Topics
The crowd around this post shows how superficial knowledge of Claude Code is. It gets releases every day, and most of this is already built into the vanilla version. Not to mention subagents working in worktrees, memory.md, a plan you can comment on directly from the interface, subagents launched in the research phase, and basic MCPs like LSP/IDE integration and context7 so you're not stuck at the knowledge cutoff. When you go to YouTube and search for something like "7 levels of Claude Code", this post would be maybe level 3-4. Oh, one more thing: quality is not consistent, so be ready for 2-3 rounds of "are you happy with the code you wrote?" and for defining audit skills crafted for your application domain, like a RODO/compliance audit, for example.
View on HN · Topics
I've been teaching AI coding tool workshops for the past year and this planning-first approach is by far the most reliable pattern I've seen across skill levels. The key insight that most people miss: this isn't a new workflow invented for AI - it's how good senior engineers already work. You read the code deeply, write a design doc, get buy-in, then implement. The AI just makes the implementation phase dramatically faster. What I've found interesting is that the people who struggle most with AI coding tools are often junior devs who never developed the habit of planning before coding. They jump straight to "build me X" and get frustrated when the output is a mess. Meanwhile, engineers with 10+ years of experience who are used to writing design docs and reviewing code pick it up almost instantly - because the hard part was always the planning, not the typing. One addition I'd make to this workflow: version your research.md and plan.md files in git alongside your code. They become incredibly valuable documentation for future maintainers (including future-you) trying to understand why certain architectural decisions were made.
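The version-the-plans suggestion takes one commit; a sketch, with hypothetical file names and commit message, assuming the agent left its output in `plan.md`:

```shell
# Move the plan next to the code it produced and record it in history.
mkdir -p docs/plans
mv plan.md docs/plans/2025-06-auth-refactor-plan.md
git add docs/plans/2025-06-auth-refactor-plan.md
git commit -m "docs: record the plan behind the auth refactor"
```

Future maintainers can then run `git log -- docs/plans/` to see which plan accompanied which change.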
View on HN · Topics
Lol, I wrote about this and have been using a plan+execute workflow for 8 months. Sadly my post didn't get much attention at the time. https://thegroundtruth.media/p/my-claude-code-workflow-and-p...
View on HN · Topics
The author seems to think they've invented a special workflow... We all tend to regress to the average (same thoughts/workflows)... I've had many users already doing the exact same workflow with: https://github.com/backnotprop/plannotator
View on HN · Topics
Haha, this is surprisingly and exactly how I use Claude as well. Quite fascinating that we independently discovered the same workflow. I maintain two directories: "docs/proposals" (for the research md files) and "docs/plans" (for the planning md files). For complex research files, I typically break them down into multiple planning md files so Claude can implement one at a time. A small difference in my workflow is that I use subagents during implementation to keep the context from filling up quickly.
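The layout this commenter describes can be reproduced with standard tools; a sketch, assuming GNU csplit, a research doc named `research.md`, and that each plan-sized chunk begins with a `## ` heading (the directory names are the commenter's, the file names are hypothetical):

```shell
# File the research doc under proposals, then cut it into per-step plan files
# that can be implemented one at a time.
mkdir -p docs/proposals docs/plans
mv research.md docs/proposals/feature-x.md
csplit -s -f docs/plans/plan- -b '%02d.md' docs/proposals/feature-x.md '/^## /' '{*}'
```

csplit writes `plan-00.md` for the preamble and one numbered file per `## ` section.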
View on HN · Topics
Spec-driven development looks very much like what the author describes. He may have some tweaks of his own, but they could just as well be coded into the artifacts that something like OpenSpec produces.
View on HN · Topics
I’ve been using Claude through opencode, and I figured this was just how it does it. I figured everyone else did it this way as well. I guess not!
View on HN · Topics
I don't deny that AI has use cases, but boy, the workflow described is boring: "Most developers type a prompt, sometimes use plan mode, fix the errors, repeat." Does anyone think this is as epic as, say, watching the Unix archives https://www.youtube.com/watch?v=tc4ROCJYbm0 where Brian demos how pipes work, or Dennis working on C and UNIX? Or even before those, the older machines? I am not at all saying that AI tools are all useless, but there is no real epicness. It is just autogenerated AI slop and blob. I don't really call this engineering (although I also agree that it is still engineering; I just don't like using the same word here). > never let Claude write code until you’ve reviewed and approved a written plan. So the junior-dev analogy is quite apt here. I tried to read the rest of the article, but I just got angrier. I never had that feeling watching old-school legends, though perhaps some of their work may be boring; but this AI-generated code... that's just some mythical random-guessing work. And none of it is "intelligent", even if it may appear to work, and may work to some extent. This is a simulation of intelligence. If it works so well, why would any software engineer still be required? Supervising would only be necessary if AI produces slop.
View on HN · Topics
The baffling part of the article is all the assertions about how this is unique, novel, not the typical way people are doing this etc. There are whole products wrapped around this common workflow already (like Augment Intent).
View on HN · Topics
Since the rise of AI systems I really wonder how people wrote code before. This is exactly how I planned out implementation and executed the plan. Might have been some paper notes, a ticket or a white board, buuuuut ... I don't know.
View on HN · Topics
Google Antigravity has this process built in. This is essentially the cycle a developer would follow: plan/analyse, document/discuss, break down tasks/implement. We’ve been using requirements and design documents as best practice since leaving our teenage bedroom labs for the professional world. I suppose this could be seen as our coding agents coming of age.
View on HN · Topics
Insights are nice for new users but I’m not seeing anything too different from how anyone experienced with Claude Code would use plan mode. You can reject plans with feedback directly in the CLI.
View on HN · Topics
This is a similar workflow to speckit, kiro, gsd, etc.
View on HN · Topics
All sounds like a bespoke way of remaking https://github.com/Fission-AI/OpenSpec
View on HN · Topics
I don't really get what is different about this from how almost everyone else uses Claude Code? This is an incredibly common, if not the most common way of using it (and many other tools).
View on HN · Topics
Sounds a bit like what Claude Plan Mode or Amazon's Kiro were built for. I agree it's a useful flow, but you can also overdo it.
View on HN · Topics
This is literally reinventing Claude's planning mode, but with more steps. I think Boris doesn't realize that a plan-mode plan is actually stored in a file. https://x.com/boristane/status/2021628652136673282
View on HN · Topics
It is really fun to watch how a baby takes its first steps, and also how experienced professionals rediscover what the standards have been telling us for 80+ years.
View on HN · Topics
That is just spec-driven development without a spec, starting with the plan step instead.
View on HN · Topics
Sorry, but I don't get the hype around this post; isn't this what most people are doing? I want to see more posts on how to use Claude "smart" without feeding it the whole codebase and polluting the context window, and more best practices on cost-efficient ways to use it. This workflow is clearly burning a million tokens per session; for me it's a no.
View on HN · Topics
Is this not just Ralph with extra steps and the risk of context rot?
View on HN · Topics
That's exactly what Cursor's "plan" mode does? It even creates md files, which seems to be the main "thing" the author discovered. Along with some cargo cult science? How is this noteworthy other than to spark a discussion on hn? I mean I get it, but a little more substance would be nice.
View on HN · Topics
This is exactly how I use it.
View on HN · Topics
You described how AntiGravity works natively.
View on HN · Topics
Kiro's spec-based development looks identical. https://kiro.dev/docs/specs/ It looks verbose but it defines the requirements based on your input, and when you approve it then it defines a design, and (again) when you approve it then it defines an implementation plan (a series of tasks.)
View on HN · Topics
I don't see how this is 'radically different' given that Claude Code literally has a planning mode. This is my workflow as well, with the big caveat that 80% of 'work' doesn't require substantive planning; we're making relatively straightforward changes. Edit: there is nothing fundamentally different about 'annotating offline' in an MD file vs. in the CLI and iterating until the plan is clear. It's a UI choice. Spec-driven coding with AI is very well established, so working from a plan, or spec (they can be somewhat different), is not novel. This is conventional CC use.
View on HN · Topics
'Giving precise feedback on a plan' is literally annotating the plan. It comes back to you with an update for verification. You ask it to 'write the plan' as matter of good practice. What the author is describing is conventional usage of claude code.