llm/8632d754-c7a3-4ec2-977a-2733719992fa/topic-7-c5ac4bb5-ed90-4e31-9d2e-c2523cec2340-input.json
The following is content for you to summarize. Do not respond to the comments—summarize them. <topic> Architects vs. Builders Analogy # Extensive debate using construction analogies to describe the shift in the developer's role. Comparisons are made between architects (who design and delegate) and builders, with arguments about whether AI users are 'vibe architects' who don't understand the materials, or professional engineers utilizing modern equivalents of CAD software and heavy machinery. </topic> <comments_about_topic> 1. Architects went from drawing everything on paper to using CAD products over a generation. That's a lot of years! They're still called architects. Our tooling just had a refresh in less than 3 years and it leaves heads spinning. People are confused, fighting for or against it. Torn even between 2025 and 2026. I know I was. People need a way to describe it, from 'agentic coding' to 'vibe coding' to 'modern AI assisted stack'. We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work! We don't call builders 'vibe builders' for using earth-moving machines instead of a shovel... When was the last time you reviewed the machine code produced by a compiler? ... The real issue this industry is facing is the phenomenal speed of change. But what are we really doing? That's right, programming. 2. "When was the last time you reviewed the machine code produced by a compiler?" Compilers will produce working output given working input literally 100% of the time in my career. I've never personally found a compiler bug. Meanwhile AI can't be trusted to give me a recipe for potato soup. That is to say, I would under no circumstances blindly follow the output of an LLM I asked to make soup. While I have, every day of my life, gladly sent all of the compiler output to the CPU without ever checking it. 
The compiler metaphor is simply incorrect and people trying to say LLMs compile English into code insult compiler devs and English speakers alike. 3. There's also no canonical way to write software, so in that sense generating code is more similar to coming up with a potato soup recipe than compiling code. 4. You need to put this revolution in scale with other revolutions. How long did it take for horses to be superseded by cars? How long did power tools take to become the norm for tradesmen? This has gone unbelievably fast. 5. > We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work! > We don't call builders 'vibe builders' for using earth-moving machines instead of a shovel... > When was the last time you reviewed the machine code produced by a compiler? Sure, because those are categorically different. You are describing shortcuts of two classes: boilerplate (library of things) and (deterministic/intentional) automation. Vibe coding doesn't use either of those things. The LLM agents involved might use them, but the vibe coder doesn't. Vibe coding is delegation, which is a completely different class of shortcut or "tool" use. If an architect delegates all their work to interns, directs outcomes based on whims, not principles, and doesn't actually know what the interns are delivering, yeah, I think it would be fair to call them a vibe architect. We didn't have that term before, so we usually just call those people "arrogant pricks" or "terrible bosses". I'm not super familiar but I feel like Steve Jobs was pretty famously that way - thus if he was an engineer, he was a vibe engineer. But don't let this last point detract from the message, which is that you're describing things which are not really even similar to vibe coding. 6. I think you are right in placing emphasis on delegation. There’s been a hypothesis floating around that I find appealing. 
Seemingly you can identify two distinct groups of experienced engineers. Manager, delegator, or team lead style senior engineers are broadly pro-AI. The craftsman, wizard, artist, IC style senior engineers are broadly anti-AI. But coming back to architects, or most professional services and academia to be honest, I do think the term vibe architect as you define it is exactly how the industry works. An underclass of underpaid interns and juniors do the work, hoping to climb higher and position themselves towards the top of the Ponzi-like pyramid scheme. 7. Architects still need to learn to draw manually quite well to pass exams and stuff. 8. > We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work! An architect's copy-pasting is equivalent to a software developer reusing a tried and tested code library. Generating or writing new code is fundamentally different and not at all comparable. > We don't call builders 'vibe builders' for using earth-moving machines instead of a shovel... We would call them "vibe builders" if their machines threw bricks around randomly and the builders focused all of their time on engineering complex scaffolding around the machines to get the bricks flying roughly in the right direction. But we don't because their machines, like our compilers and linters, do one job and they do it predictably. Most trades spend obscene amounts of money on tools that produce repeatable results. > That's a lot of years! They're still called architects. Because they still architect, they don't subcontract their core duties to architecture students overseas and just sign their name under it. I find it fitting and amusing that people who are uncritical towards the quality of LLM-generated work seem to make the same sorts of reasoning errors that LLMs do. Something about blind spots? 9. 
Reasoning by analogy is usually a bad idea, and nowhere is this worse than when talking about software development. It’s just not analogous to architecture, or cooking, or engineering. Software development is just its own thing. So you can’t use analogy to get yourself anywhere with a hint of rigour. The problem is, AI is generating code that may be buggy, insecure, and unmaintainable. We have as a community spent decades trying to avoid producing that kind of code. And now we are being told that productivity gains mean we should abandon those goals and accept poor quality, as evidenced by MoltBook’s security problems. It’s a weird cognitive dissonance and it’s still not clear how this gets resolved. 10. Don't take this as criticizing LLMs as a whole, but architects also don't call themselves engineers. Engineers are an entirely distinct set of roles that among other things validate the plan in its totality, not only the "new" 1/5th. Our job spans both of these. "Architect" is actually a whole career progression of people with different responsibilities. The bottom rung used to be the draftsmen, people usually without formal education who did the actual drawing. Then you had the juniors, mid-levels, seniors, principals, and partners who each oversaw different aspects. The architects with their name on the building were already issuing high-level guidance before the transition instead of doing their own drawings. When was the last time you reviewed the machine code produced by a compiler? Last week, to sanity-check some code written by an LLM. 11. > Engineers are an entirely distinct set of roles that among other things validate the plan in its totality, not only the "new" 1/5th. Our job spans both of these. Where this analogy breaks down is that the work you’re describing is done by Professional Engineers that have strict licensing and are (criminally) liable for the end result of the plans they approve. 
That is an entirely different role from the army of civil, mechanical, and electrical engineers (some of whom are PEs and some of whom are not) who do most of the work for the principal engineer/designated engineer/engineer of record, and who have to trust building codes and tools like FEA/FEM that then get final approval from the most senior PE. I don’t think the analogy works, as software engineers rarely report to that kind of hierarchy. Architects of Record on construction projects are usually licensed with their own licensing organization too, with layers of licensed and unlicensed people working for them. 12. That diversity of roles is what "among other things" was meant to convey. My job at least isn't terribly different, except that licensing doesn't exist and I don't get an actual stamp. My company (and possibly me depending on the facts of the situation) is simply liable if I do something egregious that results in someone being hurt. 13. > Where this analogy breaks down is that the work you’re describing is done by Professional Engineers that have strict licensing and are (criminally) liable for the end result of the plans they approve. there are plenty of software engineers that work in regulated industries, with individual licensing, criminal liability, and the ability to be struck off and banned from the industry by the regulator ... such as myself 14. Sure. But no one stops you from writing software again. It's not that PEs can't design or review buildings in whatever city the egregious failure happened. It's that PEs can't design or review buildings at all in any city after an egregious failure. It's not that PEs can't design or review hospital building designs because one of their hospital designs went so egregiously sideways. It's that PEs can't design or review any building for any use because their design went so egregiously sideways. I work in an FDA-regulated software area. I need 510(k) approval and the whole nine. 
But if I can't write regulated medical or dental software anymore, I just pay my fine and/or serve my punishment and go sling React/JS/web crap or become a TF/PyTorch monkey. No one stops me. Consequences for me messing up are far less severe than the consequences for a PE messing up. I can still write software because, in the end, I was never an "engineer" in that hard sense of the word. Same is true of any software developer. Or any unlicensed area of "engineering" for that matter. We're only playing at being "engineers" with the proverbial "monopoly money". We lose? Well, no real biggie. PEs agree to hang a sword of Damocles over their own heads for the lifetime of the bridge or building they design. That's a whole different ball game. 15. > Consequences for me messing up are far less severe than the consequences for a PE messing up. if I approve a bad release that leads to an egregious failure, for me it's a prison sentence and unlimited fines in addition to being struck off and banned from the industry > That's a whole different ball game. if you say so 16. > if I approve a bad release that leads to an egregious failure, for me it's a prison sentence and unlimited fines Again, I'm in 510(k) land. The same applies to me. No one's gonna allow me to irradiate a patient with a 10x dose because my bass ackwards software messed up scientific notation. To remove the wrong kidney because I can't convert orthonormal basis vectors correctly. But the fact remains that no one would stop either of us from writing software in the future in some other domain. They do stop PEs from designing buildings in the future in any other domain. By law. So it's very much a different ball game. After an egregious error, we can still practice our craft, because we aren't "engineers" at the end of the day. (Again, "engineers" in that hard sense of the word.) PEs can't practice their craft any longer after an egregious error. 
Because they are "engineers" in that hard sense of the word. 17. It's not about the tooling it's about the reasoning. An architect copy pasting existing blueprints is still in charge and has to decide what the copy paste and where. Same as programmer slapping a bunch of code together, plumbing libraries or writing fresh code. They are the ones who drive the logical reasoning and the building process. The ai tooling reverses this where the thinking is outsourced to the machine and the user is borderline nothing more than a spectator, an observer and a rubber stamp on top. Anyone who is in this position seriously need to think their value added. How do they plan to justify their position and salary to the capital class. If the machine is doing the work for you, why would anyone pay you as much as they do when they can just replace you with someone cheaper, ideally with no-one for maximum profit. Everyone is now in a competition not only against each other but also against the machine. And any specialized. Expert knowledge moat that you've built over decades of hard work is about to evaporate. This is the real pressing issue. And the only way you can justify your value added, your position, your salary is to be able to undermine the AI, find flaws in it's output and reasoning. After all if/when it becomes flawless you have no purpose to the capital class! 18. > We don't call architects 'vibe architects' even though they copy-paste 4/5th of your next house and use a library of things in their work! Maybe not, but we don't allow non-architects to vomit out thousands of diagrams that they cannot review, and that is never reviewed, which are subsequently used in the construction of the house. 
Your analogy to s/ware is fatally and irredeemably flawed, because you are comparing the regulated and certification-heavy production of content, which is subsequently double-checked by certified professionals, with an unregulated and non-certified production of content which is never checked by any human. 19. I don't see a flaw, I think you're just gatekeeping software creation. Anyone can pick up some CAD software and design a house if they so desire. Is the town going to let you build it without a certified engineer/architect signing off? Fuck no. But we don't lock down CAD software. And presumably, mission critical software is still going to be stamped off on by a certified engineer of some sort. 20. > Anyone can pick up some CAD software and design a house if they so desire. Is the town going to let you build it without a certified engineer/architect signing off? Fuck no. But we don't lock down CAD software. No, we lock down using that output from the CAD software in the real world. > And presumably, mission critical software is still going to be stamped off on by a certified engineer of some sort. The "mission critical" qualifier is new to your analogy, but is irrelevant anyway - the analogy breaks because, while you can do what you like with CAD software on your own PC, that output never gets used outside of your PC without multiple levels of careful review, while in the s/ware case, there is no review. 21. I am not really sure what you are getting at here. Are you suggesting that people should need to acquire some sort of credential to be allowed to code? 22. > Are you suggesting that people should need to acquire some sort of credential to be allowed to code? No, I am saying that you are comparing professional $FOO practitioners to professional $BAR practitioners, but it's not a valid comparison because one of those has review and safety built into the process, and the other does not. 
You can't use the assertion "We currently allow $FOO practitioners to use every single bit of automation" as evidence that "We should also allow $BAR practitioners to use every bit of automation", because $FOO output gets reviewed by certified humans, and $BAR output does not. 23. > We don't call architects 'vibe architects' even though (…) > We don't call builders 'vibe builders' for (…) > When was the last time (…) None of those are the same thing. At all. They are still all deterministic approaches. The architect’s library of things doesn’t change every time they use it or present different things depending on how they hold it. It’s useful because it’s predictable. Same for all your other examples. If we want to have an honest discussion about the pros and cons of LLM-generated code, proponents need to stop being dishonest in their comparisons. They also need to stop plugging their ears and stop ignoring the other issues around the technology. It is possible to have something which is useful but whose advantages do not outweigh the disadvantages. 24. I think the word predictable is doing a bit of heavy lifting there. Let’s say you shovel some dirt: you’ve got a lot of control over where you get it from and where you put it. Now get in your big digger’s cabin and try to have the same precision. At the level of a shovel-user, you are unpredictable even if you’re skilled. Some of your work might be out by a decent fraction of the width of a shovel. That’d never happen if you did it the precise way! But you have a ton more leverage. And that’s the game-changer. 25. That’s another dishonest comparison. Predictability is not the same as precision. You don’t need to be millimetric when shovelling dirt at a construction site. But you do need to be when conducting brain surgery. Context matters. 26. Sure. If you’re racing your runway to go from 0 to 100 users you’d reach for a different set of tools than if you’re contributing to postgres. 
In other words I agree completely with you, but these new tools open up new possibilities. We have historically not had super-shovels so we’ve had to shovel all the things no matter how giant or important they are. 27. > these new tools open up new possibilities. I’m not disputing that. What I’m criticising is the argument from my original parent post of comparing it to things which are fundamentally different, but making it look equivalent as a justification against criticism. 28. I think this is the crux of why, when used as an enhancement to solo productivity, you'll have a pretty strict upper bound on productivity gains given that it takes experienced engineers to review code that goes out at scale. That being said, software quality seems to be decreasing, or maybe it's just because I use a lot of software in a somewhat locked down state with adblockers and the rest. Although, that wouldn't explain just how badly they've murdered the once lovely iTunes (now Apple Music) user interface. (And why does CMD-C not pick up anything 15% of the time I use it lately...) Anyways, digressions aside... the complexity in software development is generally in the organizational side. You have actual users, and then you have people who talk to those users and try to see what they like and don't like in order to distill that into product requirements which then have to be architected and coordinated (both huge time sinks) across several teams. Even if you cut out 100% of the development time, you'd still be left with 80% of the timeline. Over time though... you'll probably see people doing what I do all day (which is moving around among many repositories (although I've yet to use the AI much, got my Cursor license recently and am gonna spin up some POCs that I want to see soon)), enabled by their use of AI to quickly grasp what's happening in the repo, and the appropriate places to make changes. 
Enabling developers to complete features from tip to tail across deep, many-pronged service architectures could bring project time down drastically and bring project management and cross-team coordination costs down tremendously. Similarly, in big companies, the hand is often barely aware at best of the foot. And space exploration is a serious challenge. Often folk know exactly one step away, and rely on well-established async communication channels which also only know one step further. Principal engineers seem to know large amounts about finite spaces and are often in the dark small hops away from things like the internal tooling for the systems they're maintaining (and often not particularly great at coming into new spaces and thinking with the same perspective... no, we don't need individual microservices for every 12-requests-a-month admin API group we want to set up). Once systems can take a feature proposal and lay out concrete plans which each little kingdom can give a thumbs up or thumbs down to for further modifications, you can again reduce exploration, coordination, and architecture time. Sadly, it seems like User Experience design is an often terribly neglected part of our profession. I love the memes about an engineer building the perfect interface like a water pitcher only for the person to position it weirdly in order to get a pour out of the fill hole or something. Lemme guess how many users you actually talked to (often zero), and how many layers of distillation occurred before you received a micro picture feature request that ends up being built and taking input from engineers with no macro understanding of a user's actual needs, or day-to-day. And who often are much more interested in perfecting some little algorithm than thinking about enabling others. So my money is on money flowing to... - People who can actually verify system integrity, and can fight fires and bugs (but a lot of bug fixing will eventually become prompting?) 
- Multi-talented individuals who can say... interact with users well enough to understand their needs as well as do a decent job verifying system architecture and security It's outside of coding where I haven't seen much... I guess people use it to more quickly scaffold up expense reports, or generate mocks. So, lots of white-collar stuff. But... it's not like the experience of shopping at the supermarket has changed, or going to the movies, or much of anything else. 29. > It's a shame that AI coding tools have become such a polarizing issue among developers. Frankly I'm so tired of the usual "I don't find myself more productive", "It writes soup". Especially when some of the best software developers (and engineers) find much utility in those tools, there should be some doubt growing in that crowd. I have come to the conclusion that software developers, those focusing only on the craft of writing code, are the naysayers. Software engineers immediately recognize the many automation/exploration/etc boosts, recognize the tools' limits and work on improving them. Hell, AI is an insane boost to productivity, even if you don't have it write a single line of code ever. But people that focus on the craft (the kind of crowd that doesn't even process the concept of throwaway code or budgets or money) will keep lying in their "I don't see the benefits because X" forever, nonsensically confusing any tool use with vibe coding. I'm also convinced that since this crowd never had any notion of what engineering is (there is very little of it in our industry sadly, technology and code is the focus and rarely the business, budget and problems to solve) and confused it with architecture, technology, or best practices, they are genuinely insecure about their jobs because once their very valued craft and skills are diminished they pay the price of never having invested in understanding the business, the domain, processes or soft skills. 30. 
This is actually an aspect of using AI tools I really enjoy: Forming an educated intuition about what the tool is good at, and tastefully framing and scoping the tasks I give it to get better results. It cognitively feels very similar to other classic programming activities, like modularization at any level from architecture to code units/functions, thoughtfully choosing how to lay out and chunk things. It's always been one of the things that make programming pleasurable for me, and some of that feeling returns when slicing up tasks for agents. 31. I agree that framing and scoping tasks is becoming a real joy. The great thing about this strategy is there's a point at which you can scope something small enough that it's hard for the AI to get it wrong and it's easy enough for you as a human to comprehend what it's done and verify that it's correct. I'm starting to think of projects now as a tree structure where the overall architecture of the system is the main trunk and from there you have the sub-modules, and eventually you get to implementations of functions and classes. The goal of the human in working with the coding agent is to have full editorial control of the main trunk and main sub-modules and delegate as many of the smaller branches as possible. Sometimes you're still working out the higher-level architecture, too, and you can use the agent to prototype the smaller bits and pieces which will inform the decisions you make about how the higher-level stuff should operate. 32. [Edit: I may have been replying to another comment in my head as now I re-read it and I'm not sure I've said the same thing as you have. Oh well.] I agree. This is how I see it too. It's more like a shortcut to an end result that's very similar to (or much better than) what I would've reached through typing it myself. The other day I did realise that I'm using my experience to steer it away from bad decisions a lot more than I noticed. 
It feels like it does all the real work, but I have to remember it's my/our (decades of) experience writing code playing a part also. I'm genuinely confused when people come in at this point and say that it's impossible to do this and produce good output and end results. 33. It is not a mind reader. I enjoy giving it feedback because it shows I am in charge of the engineering. I also love using it for research for upcoming features. Research + pick a solution + implement. It happens so fast. 34. What a lovely read. Thank you for sharing your experience. The human-agent relationship described in the article made me wonder: are natural, or experienced, managers having more success with AI as subordinates than people without managerial skill? Are AI agents enormously different from arbitrary contractors half a world away where the only communication is daily text exchanges? 35. AI adoption is being heavily pushed at my work and personally I do use it, but only for the really "boilerplate-y" kinds of code I've already written hundreds of times before. I see it as a way to offload the more "typing-intensive" parts of coding (where the bottleneck is literally just my WPM on the keyboard) so I have more time to spend on the trickier "thinking-intensive" parts. 36. He explicitly said "I don't work for, invest in, or advise any AI companies." in the article. But yes, Hashimoto is a high-profile CEO/CTO who may well have an indirect or near-future interest in talking up AI. HN articles extolling the productivity gains of Claude generally tend to be from older, managerial types (make of that what you will). 37. > A lot of us dismiss AI because "it can't be trusted to do as good a job as me" Some of us enjoy learning how systems work, and derive satisfaction from the feeling of doing something hard, and feel that AI removes that satisfaction. If I wanted to have something else write the code, I would focus on becoming a product manager, or a technical lead. 
But as is, this is a craft, and I very much enjoy the autonomy that comes with being able to use this skill and grow it. </comments_about_topic> Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.