The following is content for you to summarize. Do not respond to the comments; summarize them.

<topic> Greenfield vs Existing Codebases # Observation that most AI coding articles focus on greenfield development. Different challenges when working with legacy code and established patterns. </topic>

<comments_about_topic>
1. I'm still waiting for the multi-year success stories. Greenfield solutions are always easy (which is why we have frameworks that automate them), but maintaining solutions over years is the true test of any technology. It's already telling that nothing has staying power in the LLM world (other than the chat box). Once the limitations can no longer be hidden by the hype and the true cost is revealed, there's always a next thing to pivot to.
2. In 5 min you are one-shotting smaller changes to the larger code base, right? Not the entire 80k lines, which was the other comment's point, afaict.
3. Yeah, then I guess I misunderstood the post. It's smaller features one by one, ofc.
4. The LLM will do what you ask it to, provided you get nuanced about it. Myself and others have noticed that LLMs work better when your codebase is not full of code smells like massive god-class files; if your codebase is discrete and broken up in a way that makes sense and fits in your head, it will fit in the model's head.
5. > but I've never seen it develop something more than trivial correctly. What are you working on? I personally haven't seen LLMs struggle with any kind of problem in months. Legacy codebase with great complexity and performance-critical code. No issue whatsoever, regardless of the size of the task.
6. Ever worked on a distributed system with hundreds of millions of customers and seemingly endless business requirements? Some things are complex.
7. I'll bite, because it does seem like something that should be quick in a well-architected codebase. What was the situation? Was there something in this codebase that was especially suited to AI development? Large amounts of duplication, perhaps?
8. Well, anyone who says logging is easy has never faced the difficulty of deciding "what" to log. And audit logging is a different beast altogether from normal logging.
9. Audit logging is different from developer logging… companies will have entire teams dedicated to audit systems.
10. Most of these AI coding articles seem to be about greenfield development. That said, if you're on a serious team writing professional software, there is still tons of value in always telling the AI to plan first, unless it's a small quick task. This post just takes it a few steps further and formalizes it. I find Cursor works much more reliably using plan mode, reviewing/revising the output in markdown, then pressing build. That isn't a ton of overhead, but it often leads to lots of context switching, as it definitely adds more time.
11. For the last few days I've been working on a personal project that's been on ice for at least 6 years. Back when I first thought of the project and started implementing it, it took maybe a couple of weeks to eke out some minimally working code. This new version (from scratch with ChatGPT web) has a far more ambitious scope and is already at the "usable" point. Now I'm primarily solidifying things and increasing test coverage. And I've tested the key parts with IRL scenarios to validate that it's not just passing tests; the thing actually fulfills its intended function so far. Given the increased scope, I'm guessing it'd take me a few months to get to this point on my own, instead of under a week, and the quality wouldn't be where it is. Not saying I haven't had to wrangle with ChatGPT on a few bugs, but after a decent initial planning phase, my prompts now are primarily "Do it"s and "Continue"s. I'd likely have finished it already if I weren't copying things back and forth between browser and editor, and being forced to pause when I hit the message limit.
12. Looks good. Question: is it always better to use a monorepo in this new AI world, vs. breaking your app into separate repos? At my company we have like 6 repos, all separate Next.js apps for the same user base. Trying to consolidate to one, as it should make life easier overall.
13. It really depends, but there's nothing stopping you from just creating a separate folder with the cloned repositories (or worktrees) that you need, and having a root CLAUDE.md file that explains the directory structure and references the individual repo CLAUDE.md files.
14. Just put all the repos in one directory yourself. In my experience that works pretty well.
15. AI is happy to work with any directory you tell it to. Agent files can be applied anywhere.
16. Has anyone found an efficient way to avoid repeating the initial codebase assessment when working with large projects? There are several projects on GitHub that attempt to tackle context and memory limitations, but I haven't found one that consistently works well in practice. My current workaround is to maintain a set of Markdown files, each covering a specific subsystem or area of the application. Depending on the task, I provide only the relevant documents to Claude Code to limit the context scope. It works reasonably well, but it still feels like a manual and fragile solution. I'm interested in more robust strategies for persistent project context or structured codebase understanding.
</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points; write flowing prose.