The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic>
Software Engineering Evolution # Predictions that the discipline is shifting from "writing code" to "managing entropy" and system design. Some view this as empowering "cowboy devs" to move fast, while others fear a future of unmaintainable "vibe coded" software that no human fully understands.
</topic>

<comments_about_topic>
1. Thinking is not beside the point; it is the entire point. You seem to be defining "thinking" as an interchangeable black box: as long as something fits that slot and "gets results", it's fine. But it's the code-writing that's the interchangeable black box, not the thinking. The actual work of software development is not writing code; it's solving problems. With a problem-space-navigation model, I'd agree that there are different strategies that can find a path from A to B, and what we call cognition is one way (more like a collection of techniques) to find a path. You can, in principle, brute-force this until you get the desired result. But that's not the only thing thinking does. Thinking responds to changing constraints, unexpected effects, new information, and shifting requirements. Thinking observes its own outputs and its own actions. Thinking uses underlying models to reason from first principles. These strategies are domain-independent, too. And that's not even addressing all the other work involved in reality: deciding what the product should do when the design is underspecified; asking the client/manager/etc. what they want it to do in cases X, Y, and Z; offering suggestions and proposals and explaining tradeoffs. Now, I imagine there could be other processes we haven't conceived of that can do these things but do them differently than human brains do. But if there were, we'd probably still just call it "thinking."

2.
I'm strictly talking about "agentic" coding here: these are not silver bullets or truly "you don't need to know how to code anymore" tools. I've done a ton of work with Claude Code this year. I've gone from a "maybe one ticket a week" tier React developer to someone who's shipped entire new frontend feature sets, while also managing a team. I've used LLMs to prototype these features rapidly, tear down the barrier to entry on a lot of simple problems that were historically too big to be a single-dev item, and clear out the backlog of "nice to haves" that compete with the real meat of my business. This prototyping and "good enough" development has been massively impactful in my small org, where the hard problems come from complex interactions between distributed systems, monitoring across services, and lots of low-level machine traffic. LLMs let me solve easy problems and spend my most productive hours working with people to break down the hard problems into easy problems that I can solve later or pass off to someone on my team. I've also used LLMs to get into other people's codebases, refactor ancient tech debt, and shore up test suites from years ago that are filled with garbage and copy/paste. On testing alone, LLMs are super valuable for throwing edge cases at your code and seeing what you assumed versus what an entropy machine would throw at it. LLMs absolutely are not a 10x improvement in productivity on their own. They 100% cannot solve some problems in a sensible, tractable way, and they frequently do stupid things that waste time and would ruin a poor developer's attempts at software engineering. However, they also lower the barrier to entry and dethrone "pure single tech" (i.e. backend only, frontend only, "I don't know Kubernetes", or otherwise limited-scope) software engineers who've previously benefited from super specialized knowledge guarding their place in the business.
Software as a discipline has shifted so far from "build functional, safe systems that solve problems" to "I make 200k bike-shedding JIRA tickets that require an army of product people to come up with and manage" that LLMs can be valuable if only for their capacity to compress roles and give people with a sense of ownership the tools they need to operate like a whole team would have 10 years ago.

3. > If one "doesn't know Kubernetes", what exactly are they supposed to do now, having LLMs at hand, in a professional setting? They still "can't" assess the quality of the output, after all. They can't just ask the model, as they can't know whether the answer is misleading.

This is the fundamental problem that all these cowboy devs do not even consider. They talk about churning out huge amounts of code as if it were an intrinsically good thing. It reminds me of those awful VB6 desktop apps people kept churning out. VB6 sure made tons of people Nx more productive, but it also led to loads of legacy systems that no one wanted to touch because they were built by people who didn't know what they were doing. LLMs-for-code are another tool in the same category.

4. I don't think the conclusion is right. Your org might still require enough React knowledge to keep you gainfully employed as a pure React dev, but if all you did was change some forms, that is now something pretty much anyone can do. The value of good FE architecture has increased, if anything, since you will be adding code quicker. Making sure the LLM doesn't stupidly couple stuff together is quite important for long-term success.

5. I follow at least one GitHub repo (a well-respected one that's made the HN front page) where everything is now Claude-coded. Things do move fast, but I'm seriously unimpressed with the quality. I've raised a few concerns; some were taken in, others seem to have been shut down with an explanation Claude produced that IMO makes no sense, but which is taken at face value.
This matches my personal experience. I was asked to help with a large Swift iOS app without knowing Swift. I had access to a frontier agent and was able to consistently knock out a couple of tickets per week for about a month, until the fire was out and the actual team could take over. Code review by the owners means the result isn't terrible, but it's not great either. I leave the experience none the wiser: I gained very little knowledge of Swift, iOS development, or the project. Management was happy with the productivity boost. I think it's fleeting, and I dread a time when most code is produced that way, with the humans accumulating very little institutional knowledge and not knowing enough to properly review things.

6. I'm just one data point; me being unimpressed should not be used to judge their entire work. I feel like I have a pretty decent understanding of a few small corners of what they're doing, and I find it a bad omen that they've brushed aside some of my concerns. But I'm definitely not knowledgeable enough about the rest of it all. What concerns me, generally, is this: if the experts (and I do consider them experts) can use frontier AI to look very productive, but on close inspection of something you (in this case, I) happen to be knowledgeable about, it's not that great (built on shaky foundations), what about all the vibe-coded stuff built by non-experts?

7. Seriously, I'm lucky if 10% of what I do in a week is writing code. I'm doubly lucky if, when I do, it doesn't involve touching awful corporate horse-shit like low-code products that are allergic to LLM aid, plus multiple git repos, plus knowledge from a bunch of "cloud" dashboards and SaaS product configs. By the time I prompt all that external crap in, I could have just written what I wanted to write. Writing code is already the easy and fast part.

8.
I like coming up with the system design and the low-level pseudocode, but I find actually translating it into the specific programming language and remembering the exact syntax pretty uninspiring. Same with design docs, more or less: translating my thoughts into proper, professional English adds a layer I don't really enjoy (since I'm not exactly great at it), as does stuff like formatting and generating a nice-looking diagram. Just today I wrote a pretty decent design doc that took me two hours instead of the usual week-plus slog/procrastination, and it was actually fairly enjoyable.

9. "People using AI" underwent a meaningful change when they "joined the workforce" in 2025. We may not have gotten fully autonomous employees, but human employees using AI are doing way more than they could before, both in depth and scale. Claude Code is basically a full-time "employee" on my (profitable) open source projects, but it's still a tool I use to do all the work. Claude Code is basically a full-time "employee" at my job, but it's still a tool I use to do all the work. My workload has shifted to high-level design decisions instead of writing the code, which is kind of exactly what would have happened if AI "joined the workforce" and I had a bunch of new hires under me. I do recognize this article is largely targeted at non-dev workforces, though, where it _largely_ holds up; most of my friends outside the tech world have either gotten new jobs thanks to increased capability through AI or have deeply integrated AI into whatever workflows they're doing at work (again, as a tool) and are excelling compared to employees who don't utilize AI.

10. AI improvements will chase the bottlenecks: if product spec begins to hamper the dev process, guess what'll be the big focus at, e.g., that year's YC.

11. A brief history of programming:
1. Punch cards -> Assembly languages
2. Assembly languages -> Compiled languages
3. Compiled languages -> Interpreted languages
4.
Interpreted languages -> Agentic LLM prompting

I've tried the latest and greatest agentic CLIs and tooling with the public SOTA models. I think this is a productivity jump equivalent to maybe punch cards -> compiled languages, and that's it. Something like a 40% increase, but nowhere close to exponential.

12. Isn't that what polishing "the prompt" does? Refine the communication, like an editor does for a publication? Only in this case it's instructions for how to get a transformer to mine an existing set of code to produce some sort of vaguely useful output. The human factor adds knowledge of the why that refines the results: not just any algorithm or standard pattern that fits, but the correct solution to the correct question.

13. People talk as if communication overhead is bad. That overhead makes it possible for someone else to substitute for you (or for another person) when the need arises, and it can sometimes surface concerns earlier.

14. That fits, except that later on they say:

> And now I'm preparing for my post-software career because that coworker is going to be me in a few years.

which implies they anticipate that their manager (or someone higher up in the company) will agree with them, presumably when considering the overall effectiveness of the team.

15. Why would a company pocket the savings of less labor when it could reinvest the productivity gains of AI in more labor, shifting employees to higher-level engineering tasks?

16. In some companies, "one of your coworkers" has the skills to create and improve upon AI models themselves. Honestly, at staff level I'd expect you to be able to do a literature review and start proposing architecture improvements within a year. Charitably, it just sounds like you aren't in tech.
</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.