The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic> LLM Usage Skill Requirements # Arguments that getting value from LLMs requires skill, experience to recognize good and bad output, and knowing what questions to ask </topic>

<comments_about_topic>

1. Something I like about our weird new LLM-assisted world is the number of people I know who are coding again, having mostly stopped as they moved into management roles or lost their personal side-project time to becoming parents. AI assistance means you can get something useful done in half an hour, or even while you are doing other stuff. You don't need to carve out 2-4 hours to ramp up any more. If you have significant previous coding experience - even if it's a few years stale - you can drive these things extremely effectively. Especially if you have management experience, quite a lot of which transfers to "managing" coding agents (communicate clearly, set achievable goals, provide all relevant context).

2. Knowing how to use LLMs is a skill. Just winging it, without any practice or exploration of how the tool fails, can produce poor results.

3. "You're holding it wrong." 99% of an LLM's usefulness vanishes if it behaves like an addled old man. "What's that, sonny? But you said you wanted that!" "Wait, we did that last week? Sorry, let me look at this again." "What? What do you mean, we already did this part?!"

4. They work better with project context and access to tools, so yeah, the web interface is not their best foot forward. That doesn't mean the agents are amazing, but they can be useful.

5. > That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would otherwise use to find answers."

Yes, but I can't stop them, can you?

> But I'm glad you're able to have your fun.

Unfortunately I have to be practical.

> Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material.
Not a day goes by that I don't discover another site employing anti-AI bot software. Almost all these BigCos are using their internal code bases as material for their own LLMs. They're also increasingly instructing their devs to code primarily using LLMs. The hope that they'll run out of relevant material is slim. Oh, and at this point it's less about the core/kernel/LLMs than it is about building ol' fashioned procedural tooling, aka code, around the LLM, so that it can just REPL like a human. Turns out a lot of regular coding and debugging is what a machine would do: READ-EVAL-PRINT. I have no idea how far they're going to go, but the current iteration of Claude Code can generate average or better code, which is an improvement in many places.

6. Why train to pedal fast when we already have motorcycles? You are preparing for yesterday's needs. There will never be a time when we need to solve this manually like it's 2019. Even in 2019 we would probably have used Google; solving was already based on extensive web resources. In 1995, though, you really would have needed to do it manually. Instead of manual coding training, your time is better invested in learning to channel coding agents, how to test code to our satisfaction, and how to know if what the AI did was any good. That is what we need to train to do. Testing without manual review, because manual review is just vibes, while tests are hard. If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle. How do we automate our human-in-the-loop vibe reactions?

7. In practice, I find it depends on your work scale, topic and cadence. I started on the $20 plans for a bit of an experiment, needing to see about this whole AI thing. And for the first month or two that was enough to get the flavor. It let me see how to work. I was still mostly copy/pasting, thinking about what to do. As I got more confident I moved to the agents and the integrated editors.
Then I realised I could open more than one editor or agent at a time while each AI instance was doing its work. I discovered that when I'm getting the AI agents to summarise, write reports, investigate issues, make plans, implement changes, run builds, organise git, etc., I can alt-tab and drive anywhere between 2-6 projects at once, and I don't have to do any of the boring boilerplate or administrivia, because the AI does that; it's what it's great for. Context switching that used to be unthinkable and annoying now lets me focus on the parts of each project that actually matter: firing off instructions, handing context to the next agent, ushering them out the door and then checking on the next intern in the queue. Give them feedback on their work, usher them on, next intern. The main task now is managing the scope and context window of each AI, and structuring big projects to take advantage of that. Honestly, though, I don't view it as much more than functional decomposition. You've still got a big problem; now how do you break it down? At this rate I can sustain the $100 Claude plan, and honestly I don't need to go further than that: that's basically me working full time in parallel streams, although I might be using it at relatively cheap times, so it or the $200 plan seems about right for full-time work. I can see how theoretically you could go even above that, into full autopilot mode, but I feel I'm already at a place of diminishing marginal returns; I don't usually go over the $100 Claude Code plan, and the AIs can't do the complex work reliably enough to be left alone anyway. So at the moment, if you're going full time, I feel those plans are the sweet spot. The $20 plans are fine for getting a flavor for the first month or two, but once you come up to speed you'll breeze past their limitations quickly.

8. This goes further into LLM usage than I prefer to go.
I learn so much better when I do the research and make the plan myself that I wouldn't let an LLM do that part even if I trusted the LLM to do a good job. I basically don't outsource stuff to an LLM unless I know roughly what to expect the LLM output to look like and I'm just saving myself a bunch of typing. "Could you make me a Go module with an API similar to archive/tar.Writer that produces a CPIO archive in the newcx format?" was an example from this project.

9. Yeah, this is a lot of what I'm doing with LLM code generation these days: I've been there, I've done that, I vaguely know what the right code would look like when I see it. Rather than spend 30-60 minutes refreshing myself to swap the context back into my head, I prompt Claude to generate a thing that I know can be done. Much of the time, it generates basically what I would have written, but faster. Sometimes, better, because it has no concept of boredom or impatience while it produces exhaustive tests or fixes style problems. I review, test, demand refinements, and tweak a few things myself. By the end, I have a working thing and I've gotten a refresher on things anyway.

10. Look, yeah, one-shotting stuff makes generic UIs; impressive feat, but generic. It's getting years of side projects off the ground for me now, in languages I never learned or got professional validation for: Rust, Lua for Roblox ... in 2 parallel terminal windows and Claude Code instances, all while I get to push frontend development further and more meticulously in a 3rd. UX-heavy design with SVG animations?
I can do that now; that's fun for me. I can make experiences that I would never spend a business quarter on, and I can rapidly iterate on designs in a way I would never pay a Fiverr contractor or three for. For me the main skill is knowing what I want, and it's entirely questionable whether that's a moat at all, but for now it is, because all those "no code"-seeking product managers and ideas guys are just enamored that they can make a generic something compile. I know when to point out that the AI contradicted itself in a code concept, and when to interrupt when it's about to go off the rails. So far so great, and my backend deployment proficiency has gone from CRUD-app-only to replicating, understanding and surpassing what the veteran backend devs on my teams could do. I would previously have called myself full stack, but now I know where my limits in understanding are.

11. Exactly. What makes it even more odd for me is that they are mostly describing doing nothing when using their agents. I see the "providing important context, setting guardrails, orchestration" bits appended, and it seems like the most shallow, narrowest moat one can imagine. Why do people believe this part is any less tractable for future LLMs? Is it because they spent years gaining that experience? Some imagined fuzziness or other hand-waving while muttering something about the nature of "problem spaces"? That is the case for everything the LLMs are toppling at the moment. What is to say some new pre-training magic, post-training trick, or ingenious harness won't come along and drive some precious block of your engineering identity into obsolescence? The bits about 'the future is the product' are even stranger (the present is already the product?). To paraphrase theophite on Bluesky, people seem to believe that if there is a well that is free for all to draw from, there will still exist a substantial market willing to pay them to draw from this well.

12.
Many of the same skills that we honed by investing that time and effort into being good software developers make us good AI prompters; we simply moved another layer of abstraction up the stack.

13. I don't agree. I've recently started using Claude more than dabbling and I'm getting good use out of it. Not every task will be suitable at the moment, but many are. Give Claude lots of direction (I've been creating instructions.txt files) and iterate on those. Ask Claude to generate a plan and write it out to a file. Read the file, correct what needs correcting, then get it to implement. It works pretty well; you'll probably be surprised. I'm still doing a lot of thought work, but Claude is writing a lot of the actual code.

14. The middle step is asking an LLM how it's done and making the change yourself. You skip the web junk and learn how it's done for next time.

15. Or, given that OP is presumably a developer who just doesn't focus fully on front-end code, they could skip straight to checking MDN for "center div" and get a How To article ( https://developer.mozilla.org/en-US/docs/Web/CSS/How_to/Layo... ) as the first result without relying on spicy autocomplete. Given how often people acknowledge that AI slop needs to be verified, it seems like a shitty way to achieve something like this vs just checking it yourself with well-known good reference material.

16. Yes, I worry about this quite a bit. Obviously nobody knows yet how it will shake out, but what I've been noticing so far is that brand recognition is becoming more important. This is obviously not a good thing for startup yokels like me, but it does provide an opportunity for quality and brand building. The initial creation and generation is indeed much easier now, but testing, identifying, and fixing bugs is still very much a process that takes some investment and effort, even when AI-assisted. There is also considerable room for differentiation among user flows and the way people interact with the app.
AI is not good at this yet, so the prompter needs to be able to identify and direct these efforts. I've also noticed that in some of my projects, even ones shipped into production in a professional environment, there are lots of hard-to-fix and mostly annoying bugs that just aren't worth the effort, or that take so much research and debugging that we eventually gave up and accepted the downsides. If you give the AI enough guidance to know what to hunt for, it is getting pretty good at finding these things. Often the suggested fix is a terrible idea, but the AI will usually tell you enough about what is wrong that you can use your existing software engineering skills and experience to figure out a good path forward. At that point you can either fix it yourself, or prompt the AI to do it. My success rate doing this is still only about 50%, but that's half the bugs that we used to live with that we no longer do, which in my opinion has been a huge positive development.

17. I'll also argue that the level of skill depends on what one can make in those two days... it's like a mirror. If you don't know what to ask for, it doesn't know what to produce.

18. All fair points. I think I agree with your take overall, but we might each be focusing on situations involving different levels of capital, time, and skill: I'm imagining situations where AI use brought the barrier down substantially for some entrants, but the barriers still meaningfully exist, while it sounds to me like you're considering the essentially zero-barrier case. My Glad example was off the cuff, but it still feels apt to me for the case I mean: the barrier for an existing plastic-product producer who doesn't already make bags to also produce them is likely very low, but it's still non-zero, while the barrier for a random person is quite high.
I feel vibe coding made individual projects much cheaper (sometimes zero) for decent programmers, but it hasn't made my mom start producing programming projects -- the barrier still seems quite high for non-technical people.

19. I dunno about the Glad bag analogy, and now I'm not sure that the artist analogy applies either. I think a better analogy (i.e. one that we both agree on) is Excel spreadsheets. There are very few "Excel consultants" available that companies hire. You can't make money by providing solutions in Excel, because anyone who needs something that can be done in Excel can just do it themselves. If your mum needed to sum income and expenditures for a little side-business, she wouldn't hire an Excel consultant to write the formulas into the 4-6 cells that contain calculations; she'd simply do it herself. I think vibe coding is going to be the same way in a few years (much faster than spreadsheets took off, btw, which occurred over a period of a decade): someone who needs a little project management application isn't going to buy one, they can get one in an hour "for free"[1]. Just about anything you can vibe-code, an office worker with minimal training (the average person in 2026, for example) can vibe-code. The skill barrier to vibe-coding little apps like this is less than the skill barrier for creating Excel workbooks, and yet almost every office worker does it.

--------------------------------------------------------------

[1] In much the same way that someone considers creating a new spreadsheet to be free when they already have Excel installed, people are going to regard the output of LLMs as "free" because they are already paying the monthly fee for it.

20. You're overestimating people's willingness to write code even if they don't have to do it. Most people just don't want to do it, even if AI made it easy to do so.
Not sure who you're talking to, but most people I know who aren't programmers have zero interest in writing their own software, even if they could do it using prompts only.

21. The difference is that the head chef can cook very well and could do a better job of the dish than the trainee.

22. Yes, people who were at best average engineers, and those whose skills atrophied through lack of practice, seem to be the biggest AI fanboys in my social media. It's telling, isn't it?

23. That sounds reasonable to me. AI is best at generating super basic and common code; it will have plenty of training on game templates and simple games. Obviously you cannot generalize that to all software development, though.

24. As you get deeper, beyond the starter and bootstrap code, it definitely takes a different approach to get value. This is partly because of the context limits of large code bases, and partly because the knowledge becomes more specialized and the LLM has no training on that kind of code. But people are making it work; it just isn't as black and white.

25. > That sounds reasonable to me. AI is best at generating super basic and common code

I'm currently using AI (Claude Code) to write a new Lojban parser in Haskell from scratch, which is hardly something "super basic and common". It works pretty well in practice, so I don't think that assertion is valid anymore. There are certainly differences between tasks in terms of what works better with coding agents, but it's not as simple as "super basic".

26. I recently used AI to help build the majority of a small project (a database-driven website with search and admin capabilities) and I'd confidently say I was able to build it 3 to 5 times faster with AI. For context, I'm an experienced developer and know how to tweak the AI code when it's wonky and the AI can't be coerced into fixing its mistakes.

27.
A year or so ago I was seriously thinking of making a series of videos showing how coding agents were just plain bad at producing code. This was based on my experience trying to get them to do very simple things (e.g. a five-pointed star, or text flowing around the edge of a circle, in HTML/CSS). They still tend to fail at things like this, but I've come to realize that there are whole classes of adjacent problems they're good at, and I'm starting to leverage their strengths rather than get hung up on their weaknesses. Perhaps you're not playing to their strengths, or just haven't cracked the code for how to prompt them effectively? Prompt engineering is an art, and slight changes to prompts can make a big difference in the resulting code.

28. Perhaps it is a skill issue. But I don't really see the point of trying when it seems like the gains are marginal. If agent workflows really do start offering 2x+ improvements then perhaps I'll switch over; in the meantime I won't have to suffer mental degradation from constant LLM usage.

29. I'm better at it in the spaces where I deliver value. For me that's the backend, and I'm building complex backends with simple frontends. Sounds like your expertise is the front end, so you're going to be doing stuff that's beyond me, and beyond what the AI was trained on. I found ways to make the AI solve backend pain points (documentation, tests, boilerplate like integrations). There are probably spaces where the AI can make your work more productive, or, like my move into the front end, do work that you didn't do before.

30. This isn't supposed to be a slam on LLMs. They're genuinely useful for automating a lot of menial things... It's just that there's a point where we end up automating ourselves out of the equation, where we lose the opportunity to learn and to earn personal fulfilment. Web dev is a soft target.
It is very complex in parts, full of what feels like menial boilerplate worth abstracting, but not understanding messy topics like CSS fundamentals, browser differences, form handling and accessibility means you don't know to ask your LLM for them. You have to know what you don't know before you can consciously tell an LLM to do it for you. LLMs will get better, but does that improve things, or just relegate the human experience further and further away from accomplishment?

31. > Over the past two decades, I've worked with a lot of talented people

> I've seen the good and the bad, and I can iterate from there.

A bit of a buried lede, perhaps. After two decades in the industry, the definitions and fundamentals can rub off on you, with a little effort. There is a big difference between this and a decidedly non-technical individual without industry experience who sets out to do the same thing. This is not the advertised scenario for LLM vibe-coding.

32. > Over the past two decades, I've worked with a lot of talented people: backend developers, frontend developers, marketers, leaders, and more. I can lean on those experiences, fall back on how they did things, and implement their methods with AI.

Will that really work? You interacted with the end product, but you don't have the experience and learned lessons that those people had. Are you sure this isn't the LLM reinforcing false confidence? Is the AI providing you with the real thing or a cheap imitation, and how can you tell?

33. As someone who always dabbled in code but never was a "real" developer, I've found the same thing. I know the concepts, I know good from bad — so all of a sudden I can vibe code things that would have taken me months of studying and debugging and banging my head against the wall. If you'll forgive a bit of self promotion, I also wrote some brief thoughts on my Adventures In AI Prototyping: https://www.andrew-turnbull.com/adventures-in-ai-prototyping...

34.
Agree. I developed a 150K-line stock analytics SaaS that started with the wish to provide my son with some tools to analyse stocks. I enjoyed this experience of CLI coding so much that I went on to build: Market Sentiment parsing of 300,000 business articles and news items daily; a dividend-based strategy with a calendar of payouts and AI-optimised strategies to extract every drop of interest; an alert system where a strategy you backtested in the playground has its key triggers tracked automatically so you can react; an ETF risk analysis model with external factors; all the quant graphs and then some; time models with Markov chains; candlestick patterns; Monte Carlo simulation; walk-forward and other approaches I had learned over the years. There is much more. I know you don't measure a project in terms of lines of code, but these features are optimised, verified, tested, debugged and deployed. There are so many features because I was having fun and got carried away. I'm semi-retired and this is like having my web agency back again. I used to program in GRASP... I have a data scientist certification and did a lot of Python, machine learning, NLP, etc. I really enjoy the prompt-based development process, as it feels like you are reaching the right resource for your question from a staff of experienced devs. Of course you need to check everything, as a junior dev always creeps in when you least expect it. Especially for security. Discuss best practices often and do your research on touchy subjects. Compare various AIs on the same topic. Grok has really caught up. OpenAI has slowed down. Claude is simply amazing. This AI thing is a work in progress and constantly changing. I have noticed an amazing progression over the past year. I have a feeling their models are retrained and tweaked on our interactions, even if you asked for the data not to be used. The temptation is too high and the payoffs abound in this market for the best AI tools.
I'm building a code factory now, with agents and key checkpoints for every step. I want to remove human intervention from multiple time-consuming sub-steps so I can be even more productive in 2026...

35. It's also trading one problem for another. When manually coding, you understand with little mental effort what you want to achieve, the nuances and constraints, how something interacts with other moving parts, and your problem is implementing the solution. When generating a solution, you need to explain in excruciating detail the things that you just know effortlessly. It's a different kind of work, but it's still work, and it's more annoying and less rewarding than just implementing the solution yourself.

36. > when generating a solution, you need to explain in excruciating detail the things that you just know effortlessly

This is a great way of explaining the issue.

37. This is probably the best post I've seen about the whole LLM / vibe coding space, at least in relation to web dev. Indeed, as the author states, the code / agent often needs some corralling, but if you know all the gotchas / things to look for, you can focus 100% on the creative part! Been loving it as well.

38. Agree with this. Like the author, I've been keeping up to date with web development for multiple decades now. If you have deep software knowledge pre-LLM, you are equipped with the intuition and knowledge to judge the output. You can tell the difference between good and bad, whether it looks and works the way you want, and you can ask the relevant questions to push the solution toward the actual thing that you envisioned in your mind. Without prior software dev experience, people may take what the LLM gives them at face value, and that's where the slop comes from, imho.

39. >> Starting a new project once felt insurmountable. Now, it feels realistic again.

Honestly, this does not give me confidence in anything else you said.
If you can't spin up a new project on your own in a few minutes, you may not be equipped to deal with or debug whatever AI spins up for you.

>> When AI generates code, I know when it's good and when it's not. I've seen the good and the bad, and I can iterate from there. Even with refinement and back-and-forth prompting, I'm easily 10x more productive

Minus a baseline, it's hard to tell what this means. 10x nothing is nothing. How am I supposed to know what 1x is for you? Is there a 1x site I can look at to understand what 10x would mean? My overall feeling prior to reading this was "I should hire this guy", and after reading it my overwhelming thought was "eat a dick, you sociopathic self-aggrandizing tool." Moreover, if you have skill which you feel is augmented by these tools, then you may want to lean more heavily on that skill now, if you think that the tool itself makes everyone capable of writing the same amazing code you do. Because it sounds like you will be unemployed soon, if not already, as a casualty of the nonsense engine you're blogging about and touting.

40. I remember when Hacker News felt smaller. Threads were shorter. Context fit in your head. You could read the linked article, skim the comments, and jump in without feeling like you'd missed a prerequisite course. It probably didn't feel special at the time, but looking back, it was simpler. The entire conversation space was manageable. If you had a thought, you could express it clearly, hit "reply," and reasonably expect to be understood. As a single commenter, you could hold the whole discussion in your mind, from article to argument to conclusion. Or at least, it felt that way. I'm probably romanticizing it—but you know what I mean. Now, articles are denser. Domains are deeper. Threads splinter instantly.
Someone cites a paper, someone else links a counter-paper, a third person references a decades-old mailing list post, and suddenly the discussion assumes years of background you may or may not have. You're expected to know the state of the art, the historical context, the common rebuttals, the terminology, and the unwritten norms—while also being concise, charitable, and original. Every field has matured—probably for the better—but it demands deeper domain knowledge just to participate without embarrassing yourself. Over time, I found myself backing out of threads I was genuinely interested in, not because I had nothing to say, but because the cognitive load felt too high. As a solo thinker, it became harder to keep up.

> AI has entered the chat.

They're far from perfect, but tools like Claude and ChatGPT gave me something I hadn't felt in a long time: _leverage_. I can now quickly:

- Summarize long articles
- Recall prior art
- Check whether a take is naïve or already debunked
- Clarify my own thinking before posting

Suddenly, the background complexity matters a lot less. I can go from "half-formed intuition" to "coherent comment" in minutes instead of abandoning the tab entirely. I can re-enter conversations I would've previously skipped.

> Oh no, you're outsourcing thinking—bet it's all slop!

Over the years, I've read thousands of great HN comments. Thoughtful ones. Careful ones. People who knew when to hedge, when to cite, when to shut up. That pattern is in my head now. With AI, I can lean on that experience. I can sanity-check tone. I can ask, "Is this fair?" or "What am I missing?" I can stress-test an argument before I inflict it on strangers. When AI suggests something wrong, I know it's wrong. When it's good, I recognize why. Iteration is fast. Even with back-and-forth refinement, I'm dramatically more effective at expressing what I already think. The goal hasn't changed: contribute something useful to the discussion. The bar is still high.
But now I have a ladder instead of a sheer wall. There's mental space for curiosity again. My head isn't constantly overloaded with "did I miss context?", "is this a known bad take?", or "will this derail into pedantry?" I can offload that checking to AI and focus on the _idea_. That leaves room to explore. To ask better questions. To write comments that connect ideas instead of defensively hedging every sentence. To participate for the joy of thinking in public again. It was never about typing comments fast, or winning arguments. It was about engaging with interesting people on interesting problems. Writing was just the interface. And with today's tools, that interface is finally lighter again. AI really has made commenting on Hacker News fun again.

41. 100% the opposite. LLMs lack high-level creativity, wisdom and taste. Being a generalist is how you build these. For example, there's a common core to music, art, food, writing, etc. that you don't see until you've gotten good at 3+ aesthetic fields. There are common patterns across different academic disciplines and activities that can supercharge your priors and help you make better decisions. LLMs can "see" these connections if explicitly prompted with domains and details, but they don't seem to reason with them in mind or lean on them by default. On the other hand, LLMs are being aggressively RL'd by the top 10% of various fields, so single-field expertise by some of the best in the world is 100% baked in and the default.

</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.