The following is content for you to summarize. Do not respond to the comments—summarize them. <topic> Impact on Skill and Learning # Concerns about the long-term effects on human expertise. Topics include "skill atrophy" where juniors bypass learning fundamentals, the educational crisis evidenced by Chegg's collapse, and the difficulty of debugging AI code without deep institutional knowledge or "muscle memory" of the system. </topic> <comments_about_topic> 1. I’m strictly talking about “Agentic” coding here: they are not a silver bullet or truly “you don’t need to know how to code anymore” tools. I’ve done a ton of work with Claude Code this year. I’ve gone from a “maybe one ticket a week” tier React developer to someone who’s shipped entire new frontend feature sets, while also managing a team. I’ve used LLMs to prototype these features rapidly, tear down the barrier to entry on a lot of simple problems that are historically too big to be a single-dev item, and clear out the backlog of “nice to haves” that compete with the real meat and bread of my business. This prototyping and “good enough” development has been massively impactful in my small org, where the hard problems come from complex interactions between distributed systems, monitoring across services, and lots of low-level machine traffic. LLMs let me solve easy problems and spend my most productive hours working with people to break down the hard problems into easy problems that I can solve later or pass off to someone on my team. I’ve also used LLMs to get into other people’s codebases, refactor ancient tech debt, and shore up test suites from years ago that are filled with garbage and copy/paste. On testing alone, LLMs are super valuable for throwing edge cases at your code and seeing what you assumed vs. what an entropy machine would throw at it. LLMs absolutely are not a 10x improvement in productivity on their own.
They 100% cannot solve some problems in a sensible, tractable way, and they frequently do stupid things that waste time and would ruin a poor developer’s attempts at software engineering. However, they absolutely also lower the barrier to entry and dethrone “pure single tech” (i.e. backend only, frontend only, “I don’t know Kubernetes”, or other limited scope) software engineers who’ve previously benefited from super specialized knowledge guarding their place in the business. Software as a discipline has shifted so far from “build functional, safe systems that solve problems” to “I make 200k bike shedding JIRA tickets that require an army of product people to come up with and manage” that LLMs can be valuable if only for their capability to role-compress and give people with a sense of ownership the tools they need to operate like a whole team would 10 years ago. 2. > However, they absolutely also lower the barrier to entry and dethrone “pure single tech” (i.e. backend only, frontend only, “I don’t know Kubernetes”, or other limited scope) software engineers who’ve previously benefited from super specialized knowledge guarding their place in the business. This argument gets repeated frequently, but to me it seems to be missing a final, actionable conclusion. If one "doesn't know Kubernetes", what exactly are they supposed to do now, having an LLM at hand, in a professional setting? They still "can't" assess the quality of the output, after all. They can't just ask the model, as they can't know if the answer is not misleading. Assuming we are not expecting people to operate with implicit delegation of responsibility to the LLM (something that is ultimately not possible anyway - taking blame is a privilege humans will keep for the foreseeable future), I guess the argument in the form above collapses to "it's easier to learn new things now"?
But this does not eliminate (or reduce) the need for specialization of knowledge on the employee side, and there is only so much you can specialize in. The bottleneck has maybe shifted right somewhat (from the time/effort of the learning stage to the cognition and memory limits of an individual), but the output on the other side of the funnel (of learn->understand->operate->take-responsibility-for) didn't necessarily widen that much, one could argue. 3. I don’t think the conclusion is right. Your org might still require enough React knowledge to keep you gainfully employed as a pure React dev, but if all you did was change some forms, that is now something pretty much anyone can do. The value of good FE architecture increased, if anything, since you will be adding code quicker. Making sure the LLM doesn’t stupidly couple stuff together is quite important for long-term success. 4. > They still "can't" assess the quality of the output, after all. They can't just ask the model, as they can't know if the answer is not misleading. Wasn't this a problem before AI? If I took a book or online tutorial and followed it, could I be sure it was teaching me the right thing? I would need to make sure I understood it, that it made sense, that it worked when I changed things around, and I would need to combine multiple sources. That still needs to be done. You can ask the model, and you'll have to judge the answer, same as if you asked another human. You have to make sure you are in a realm where you are learning, but aren't so far out that you can easily be misled. You do need to test out explanations and seek multiple sources, of which AI is only one. An AI can hallucinate and just make things up, but the chance that different sessions with different AIs lead to the same hallucinations that consistently build upon each other is unlikely enough not to be worth worrying about. 5.
I follow at least one GitHub repo (a well respected one that's made the HN front page) where everything is now Claude-coded. Things do move fast, but I'm seriously underwhelmed by the quality. I've raised a few concerns; some were taken in, others seem to have been shut down with an explanation Claude produced that IMO makes no sense, but which is taken at face value. This matches my personal experience. I was asked to help with a large Swift iOS app without knowing Swift. I had access to a frontier agent. I was able to consistently knock out a couple of tickets per week for about a month until the fire was out and the actual team could take over. Code review by the owners means the result isn't terrible, but it's not great either. I leave the experience none the wiser: I gained very little knowledge of Swift, iOS development, or the project. Management was happy with the productivity boost. I think it's fleeting, and I dread a time when most code is produced that way, with humans accumulating very little institutional knowledge and not knowing enough to properly review things. 6. > Longer term, he was also quite optimistic on its ability to cut out roles like radiologists, instead having a software program interpret the images and write a report to send to a consultant. As a medical imaging tech, I think this is a terrible idea. At least for the test I perform, a lot of redundancy and double-checking is necessary because results can easily be misleading without a diligent tech or critical thinking on the part of the reading physician. For instance, imaging at slightly the wrong angle can make a normal image look like pathology, or vice versa. Maybe other tests are simpler than mine, but I doubt it. If you've ever asked an AI a question about your field of expertise and been amazed at the nonsense it spouts, why would you trust it to read your medical tests?
> Since the consultant already checks the report against any images, the AI being more sensitive to potential issues is a positive thing: giving him the power to discard erroneous results rather than potentially miss something more malign. Unless they had the exact same schooling as the radiologist, I wouldn't trust the consultant to interpret my test, even if paired with an AI. There's a reason this is a whole specialized field -- because it's not as simple as interpreting an EKG. 7. I had some .csproj files that only worked with msbuild/vsbuild that I wanted to make compatible with dotnet. Copilot does a pretty good job of updating these and identifying the ones more likely to break (say, web projects compared to plain DLLs). It isn't simple fire-and-forget, but it did make the conversion possible without me needing to do as much research into what was changing. Is that a net benefit? Without AI, if I really wanted to do that conversion, I would have had to become much more familiar with the inner workings of csproj files. That is a benefit I've lost, but it would've also taken longer -- so much longer I might not have decided to do the conversion at all. My job doesn't really have a need for someone that deeply specialized in csproj, and it isn't a particular interest of mine, so letting AI handle it while being able to answer a few questions to sate my curiosity seemed a great compromise. A second example: it works great as a better option than a rubber duck. I noticed some messy programming where, basically, OOP had been abandoned in favor of one massive class doing far too much work. I needed to break it down, and talking with AI about it helped come up with some design patterns that worked well. AI wasn't good enough to do the refactoring in one go, but it helped talk through the pros and cons of a few design patterns and was able to create test examples so I could get a feel for what it would look like when done.
Also, when I finished, I had AI review the result, and it caught a few typos that weren't compile errors before I even got to the point of testing. None of these were things AI could do on its own, and they definitely aren't areas where I would have just blindly trusted some vibe-coded output, but overall it was a productivity increase well worth the $20 or so in cost. (Now, one may argue that is the subsidized cost, and the unsubsidized cost would not have been worthwhile. To that, I can only say I'm not versed enough in the costs to be sure, but the argument does seem plausible.) 8. > double your productivity Churning out 2x as much code is not doubling productivity. Can you perform at the same level as a dev who is considered 2x as productive as you? That's the real metric. Compare quality-to-quantity ratios of code, bugs caused by your PRs, actual understanding of the code in your PRs, ability to think slow, ability to deal with fires, ability to quickly deal with breaking changes accidentally caused by your changes. Churning out more code per day is not the goal. There's no point merging code that doesn't fully work, is not properly tested, or that other humans (or you) cannot understand. 9. You discount the value of being intimately familiar with each line of code, the design decisions, and the tradeoffs, because one wrote the bloody thing. It is negative value for me to have a mediocre machine do that job for me, leaving code I will still have to maintain, yet having learned absolutely nothing from the experience of building it. 10. This to me seems like saying you can learn nothing from a book unless you yourself have written it. You can read the code the LLM writes the same as you can read the code your colleagues write. Moreover, you have to pretty explicitly tell it what to write for it to be very useful. You're still designing what it's doing; you just don't have to write every line. 11. I think it depends on what "join" means.
I see no reason why it has to be "replace a human". People used to have secretaries back in the day; we don't anymore, we all do our own thing, but in a way, LLMs are our secretaries of sorts now. Or our personal executive assistants, even if you're not an executive. I don't know what else LLMs need to do. Get on the payroll? People are using them heavily. You can't even google things easily without triggering an LLM response. I think the current millennial and older generations are too used to the pre-LLM way of things, so the resistance will be there for a long time to come. But kids doing homework with LLMs will rely on them heavily once they're in the workforce. I don't know how people are not as fascinated and excited about this. I keep watching older sci-fi content, and LLMs are now doing for us what the "futuristic computer persona" did in older sci-fi. Easy example: you no longer need copywriters because of LLMs. You had spell/grammar checkers before, but they didn't "understand" context and recommend different phrasing, or check for things like continuity and rambling on. 12. Cal Newport looked in the wrong places. He has no visibility into the usage of ChatGPT to do homework. The collapse of Chegg should tell you, with no other public information, that if 30% of students were already cheating somehow, somewhat weakly, they are now doing super-powered cheating, and surely more than 30% of students at this stage. It’s also kind of stupid to hand-wave away programming. Programmers are where all the early adopters of software are. He’s merely conflating an adoption curve with capabilities. Programmers, I’m sure, were also the first to use Google and smartphones. “It doesn’t work for me” is missing the critical word “yet” at the end, and really, is it saying much that forecasts in the metric “years until Cal Newport’s arbitrary criteria of what agents and adoption mean meet some threshold that exists only inside Cal Newport’s head” are hard to make?
There are 700M weekly active users of ChatGPT. It has joined the workforce! It just isn’t being paid the salaries. 13. Wow, homework is an insane example of a "workforce." Homework is in some ways the opposite of actual economic labor. Students pay to attend school, and homework is (theoretically) part of that education; something designed to help students learn more effectively. They are most certainly not paid for it. Having an LLM do that "work" is economically insane. The desired learning does not happen, and the labor of grading and giving feedback is entirely wasted. Students use ChatGPT for it because of the perverse incentives of the educational system. It has no bearing on economic production of value. 14. Importantly, the _reason_ that ChatGPT is good at this kind of homework is that the homework is _intended_ to be toil. That's how we learn: through doing things, and through repetition. The problem set or paper you turn in is not the product. The product is the learning that the human obtains from the _process_. The homework is just there, being graded, to evaluate your progress at performing the required toil. 15. > The homework is just there, being graded, to evaluate your progress at performing the required toil. There’s the problem: some students don’t want an education, they just want a qualification, even if it means cheating on the evaluation. 16. But, taking a step back, assigning homework in the first place is economically insane. What's the point? Who ever actually learnt anything from homework? 17. That's a jump if you are a junior. It falls down hard for seniors doing more complex stuff. I'm also reminded that we tried the whole "make it look like human language" thing with COBOL, and it turned out that language wasn't the bottleneck; the ability of people to specify exactly what they want was the bottleneck.
Once you have an exact spec, even writing the code on your own isn't all that hard, but extracting that spec from stakeholders has always been the harder part of programming. 18. In some companies, “one of your coworkers” has the skills to create & improve upon AI models themselves. Honestly, at staff level I’d expect you to be able to do a literature review and start proposing architecture improvements within a year. Charitably, it just sounds like you aren’t in tech. </comments_about_topic> Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.