Summarizer

LLM Input

llm/fa6df919-50f4-440a-804d-6a9d3e9721d8/topic-2-cf593a7b-3587-42cd-ad77-5d6e8d036472-input.json

prompt

The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic>
Learning vs Efficiency Tradeoff # Tension between using AI to get things done quickly versus the value of learning through struggle, friction, and hands-on experience with tools and concepts
</topic>

<comments_about_topic>
1. For me it's all the build stuff and scaffolding I have to get in place before I can even start tinkering on a project. I never formally learned all the systems and tools, and AI makes all of that 10x easier. When I hit something I cannot figure out, instead of googling for half an hour it's 10 minutes in AI.

2. The difference is that after you’ve googled it for ½ hour, you’ve learned something. If you ask an LLM to do it for you, you’re none the wiser.

3. Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ...

4. I feel like we are just covering whataboutism tropes now.

You can absolutely learn from an LLM. Sometimes documentation sucks, and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate.

And with the people above, I agree: sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grab. I for one just cannot keep up with front-end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back-end deep-dive data and connecting APIs, however. So: AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC2 compliant and OWASP proof, nor does it need ISO 27001 compliance testing, because after all this is just for fun, for me.

5. This is just not true. I have wasted many hours looking for answers to hard-to-phrase questions and learned very little from the process. If an LLM can get me the same result in 30 seconds, it's very hard for me to see that as a bad thing. It just means I can spend more time thinking about the thing I want to be thinking about. I think to some extent people are valorizing suffering itself.

6. Learning means friction, it's not going to happen any other way.

7. Some of it is friction, some of it is play. With AI you can get faster to the play part, where you do learn a fair bit. But in a sense I agree that less is retained. I think that's not because of a lack of friction; instead it's the fast pace of getting what you want now. You no longer need to make a conscious effort to remember any of it, because it's effortless to get it again with AI if you ever need it. If that's what you mean by friction, then I agree.

8. I agree! I just don’t think the friction has to come from tediously trawling through a bunch of web pages that don’t contain the answer to your question.

9. You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working.

10. Studying gibberish doesn't teach you anything. If you were cargo culting shit before AI you weren't learning anything then either.

11. Necessarily, LLM output that works isn't gibberish.

The code that LLMs output has worked well enough to learn from since the initial launch of ChatGPT, even though back then you might have had to repeatedly say "continue" because it would stop in the middle of writing a function.

12. "Gibberish" code is necessarily code which doesn't work. Even in the broader use of the term: https://en.wikipedia.org/wiki/Gibberish

Especially in this context, if a mystery box solves a problem for me, I can look at the solution and learn something from it, cf. how paper was inspired by watching wasps at work.

Even the abject failures can be interesting, though I find them more helpful for forcing my writing to be easier to understand.

13. It's not gibberish. More than that, LLMs frequently write comments (some are fluff but some explain the reasoning quite well), variables are frequently named better than cdx, hgv, ti, stuff like that, plus looking at the reasoning while it's happening provides more clues.

Also, it's actually fun watching LLMs debug. Since they're reasonably similar to devs while investigating, but they have a data bank the size of the internet so they can pull hints that sometimes surprise even experienced devs.

I think hard earned knowledge coming from actual coding is still useful to stay sharp but it might turn out the balance is something like 25% handmade - 75% LLM made.

14. Not necessarily. The end result of googling a problem might be copying a working piece of code off of stack exchange etc. without putting any work into understanding it.

Some people will try to vibe out everything with LLMs, but other people will use them to help engage with their coding more directly and better understand what's happening, not do worse.

15. I don't think "learning" is a goal here...

16. I don't want to waste time learning how to install and configure ephemeral tools that will be obsolete before I ever need to use them again.

17. And I don't want to use tools that I don't understand at least to some degree. I always get nervous when I do something without knowing why I'm doing it.

18. Depends on what level of abstraction you're comfortable with. I have no problem driving a car I didn't build.

19. I didn't build my car either. But I understand a bit of most of the main mechanics: how the ABS works, how power steering works, how an ICE works, and so on.

20. I don't think I'll learn anything by yet again implementing authentication, password reset, forgotten password, etc.

21. >> The difference is that after you’ve googled it for ½ hour, you’ve learned something.

I've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I googled yesterday.

22. Additionally, in the good/bad old days of using StackOverflow, maybe 10% of the answers actually explained how that thing you wanted to do actually worked, the rest just dumped some code on you and left you to figure it out by yourself, or more likely just copy & paste it and be happy when it worked (if you were lucky)...

23. > The difference is whether or not you find computers interesting and enjoy understanding how they work.

I'm a stereotypical nerd, into learning for its own sake.

I can explain computers from the quantum mechanics of band gaps in semiconductors up to fudging objects into C and the basics of operating systems with pre-emptive multitasking, virtual memory, and copy-on-write as they were c. 2004.

Further up the stack it gets fuzzy (not that even those foundations aren't; I know the "basics" of OSes, but I couldn't write one); e.g. SwiftUI is basically a magic box, and I find it a pain to work with as a result.

LLM output is easier to understand than SwiftUI, even if the LLM itself has much weirder things going on inside.

24. Nope, but that was the example I had in mind when I chose my phrasing :)

I think I can describe the principles at work with DNS, but not all of how IP packets are actually routed; the physics of beamforming and QAM, but none of the protocol of WiFi; the basics of error correction codes, but only the basics and they're probably out of date; the basic ideas used in private key crypto but not all of HTTPS; I'd have to look up the OSI 7-layer model to remember all the layers; I understand older UI systems (I've even written some from scratch), but I'm unsure how much of current web browsers are using system widgets vs. it all being styled HTML; interrupts as they used to be, but not necessarily as they still are; my knowledge of JavaScript is basic; and my total knowledge of how certificate signing works is the conceptual level of it being an application of public-private key cryptography.

I have e.g. absolutely no idea why Chrome is famously a memory hog, and I've never learned how anything is scheduled between cores at the OS level.

25. It depends. Sometimes the joy is in discovering what problem you are solving, by exploring the space of possibilities on features and workflows in a domain.

For that, having elegant and simple software is not needed; getting features fast to try out how they work is the basis of the pleasure, so having to write every detail by hand reduces the fun.

26. Sure, but, in the real world, for the software to deliver a solution, it doesn't really matter if something is modelled in beautiful objects and concise packages, or if it's written in one big method. So for those that are more on the making /things/ side of the spectrum, I guess they wouldn't care if the LLM outputs code that has each iteration written separately.

It's just that if you really like to work on your craftsmanship, you spend most of the time rewriting/remodelling, because that's where the fun is if you're more on the /crafting/ side of the spectrum, and LLMs don't really assist in that part (yet?). Maybe LLMs could be used to discuss ways to model a problem space?

27. As I've gotten more experience I've tended to find more fun in tinkering with architectures than tinkering with code. I'm currently working on making a secure zero-trust bare metal kubernetes deployment that relies on an immutable UKI and TPM remote attestation. I'm making heavy use of LLMs for the different implementation details as I experiment with the architecture. As far as I know, to the extent I'm doing anything novel, it's because it's not a reasonable approach for engineering reasons even if it technically works, but I'm learning a lot about how TPMs work and the boot process and the kernel.

I still enjoy writing code as well, but I see them as separate hobbies. LLMs can take my hand-optimized assembly drag racing or the joy of writing a well-crafted library from my cold dead hands, but that's not always what I'm trying to do and I'll gladly have an LLM write my OCI layout directory to CPIO helper or my Bazel rule for putting together a configuration file and building the kernel so that I can spend my time thinking about how the big pieces fit together and how I want to handle trust roots and cold starts.

28. This goes further into LLM usage than I prefer to go. I learn so much better when I do the research and make the plan myself that I wouldn’t let an LLM do that part even if I trusted the LLM to do a good job.

I basically don’t outsource stuff to an LLM unless I know roughly what to expect the LLM output to look like and I’m just saving myself a bunch of typing.

“Could you make me a Go module with an API similar to archive/tar.Writer that produces a CPIO archive in the newcx format?” was an example from this project.

29. Yeah, this is a lot of what I'm doing with LLM code generation these days: I've been there, I've done that, I vaguely know what the right code would look like when I see it. Rather than spend 30-60 minutes refreshing myself to swap the context back into my head, I prompt Claude to generate a thing that I know can be done.

Much of the time, it generates basically what I would have written, but faster. Sometimes, better, because it has no concept of boredom or impatience while it produces exhaustive tests or fixes style problems. I review, test, demand refinements, and tweak a few things myself. By the end, I have a working thing and I've gotten a refresher on things anyway.

30. LLMs are really showing how different programmers are from one another

i am in your camp, i get 0 satisfaction out of seeing something appear on the screen which i don't deeply understand

i want to feel the computer as i type, i've recently been toying with turning off syntax highlighting and LSPs (not for everyone), and i am surprised at the lack of distractions and feeling of craft and joy it brings me

31. I think it just depends on the person or the type of project. If I'm learning something or building a hobby project, I'll usually just use an autocomplete agent and leave Claude Code at work. On the other hand, if I want to build something that I actually need, I may lean on AI assistants more because I'm more interested in the end product. There are certain tasks as well that I just don't need to do by hand, like typing an existing SQL schema into an ORM's DSL.

32. Historically, tinkerers had to stay within an extremely limited scope of what they know well enough to enjoy working on.

AI changes that. If someone wants to code in a new area, it's 10000000x easier to get started.

What if the # of handwritten lines of code is actually increasing with AI usage?

33. It's a little shameful, but I still struggle when centering divs on a page. Yes, I've known about flexbox for more than a decade, but I always have to search to remember how it's done.

So instead of refreshing that less used knowledge I just ask the AI to do it for me. The implications of this vs searching MDN Docs is another conversation to have.

34. Hah, centering divs with flexbox is one of my uses for this too! I can never remember the syntax off the top of my head, but if I say "center it with flexbox" it spits out exactly the right code every time.

If I do this a few more times it might even stick in my head.

35. Search “centre a div” in Google

Wade through ads

Skim a treatise on the history of centering content

Skim over the "this question is off topic / duplicate" noise of Stack Overflow

Find some code on the page

Try to map how that code will work in the context of your other layout

Realize it’s plain CSS and you’re looking for Tailwind

Keep searching

Try some stuff until it works

Or…

Ask LLM. Wait 20-30 seconds. Move on to the next thing.

36. The middle step is asking an LLM how it's done and making the change yourself. You skip the web junk and learn how it's done for next time.

37. Yep, that’s not a bad approach, either.

I did that a lot initially, it’s really only with the advent of Claude Code integrated with VS Code that I’m learning more like I would learn from a code review.

It also depends on the project. Work code gets a lot more scrutiny than side projects, for example.

38. > Search “centre a div” in Google

Aaand done. Very first result was a blog post showing all the different ways to do it, old and new, without any preamble.

39. Or, given that OP is presumably a developer who just doesn't focus fully on front end code they could skip straight to checking MDN for "center div" and get a How To article ( https://developer.mozilla.org/en-US/docs/Web/CSS/How_to/Layo... ) as the first result without relying on spicy autocomplete.

Given how often people acknowledge that ai slop needs to be verified, it seems like a shitty way to achieve something like this vs just checking it yourself with well known good reference material.

40. If only it were that easy. I got really good at centering and aligning stuff, but only when the application is constructed in the way I expect. This is usually not a problem as I'm usually working on something I built myself, but if I need to make a tweak to something I didn't build, I frequently find myself frustrated and irritated, especially when there is some higher or lower level that is overriding the setting I just added.

As a bonus, I pay attention to what the AI did and its results, and I have actually learned quite a bit about how to do this myself even without AI assistance.

41. Another anecdote: I built my first Android app in less than a dozen hours over the holiday, tailored for a specific need I have. I do have many years of experience with Java, C# and JS (Angular), but have never coded anything for mobile. Gemini helped me figure out how to set up a Kotlin app with a reasonable architecture (Hilt for dependency injection, etc). It also helped me find Material3 components and set up the UI in a way that looks not too bad, especially considering my lack of design skills. The whole project was a real joy to do, and I have a couple of more ideas that I'm going to implement over the coming months.

As a father of three with a busy life, this would've simply been impossible a couple of years ago.

42. The good thing about AI is that it knows all the hundreds of little libraries that keep popping up every few days like a never-ending stream. I no longer need to worry about learning this stuff; I can just ask the AI what libraries to use for something, and it will bring up those dependencies and provide sample code to use them. I don't like AI for coding real algorithms, but I love the fact that I don't need to worry about the myriad of libraries you had to keep up with until recently.

43. Definitely. I'm not disparaging the process of assembling IKEA furniture, nor the process of producing software using LLMs. I've done both, and they have their time and place.

What I'm pushing back on is the idea that these are equivalent to carpentry and programming. I think we need new terminology to describe this new process. "Vibe coding" is at the extreme end of it, and "LLM-assisted software development" is a mouthful.

Although, the IKEA analogy could be more accurate: the assembly instructions can be wrong; some screws may be missing; you ordered an office chair and got a dining chair; a desk may have five legs; etc. Also, the thing you built is made out of hollow MDF, and will collapse under moderate levels of stress. And if you don't have prior experience building furniture, you end up with no usable skills to modify the end result beyond the manufacturer's original specifications.

So, sure, the seemingly quick and easy process might be convenient when it works. Though I've found that it often requires more time and effort to produce what I want, and I end up with a lackluster product, and no learned skills to show for it. Thus learning the difficult process is a more rewarding long-term investment if you plan to continue building software or furniture in the future. :)

44. What do LLMs have to do with returning to coding?

Just...

...write the code. Stop being lazy.

45. I'm sure there are plenty of language parsers written in Haskell in the training data. Regardless, the question isn't whether LLMs can generate code (they clearly can); it's whether agentic workflows are superior to writing code by hand.

46. One concern is those less experienced engineers might never become experienced if they’re using AI from the start. Not that everyone needs to be good at coding. But I wonder what new grads are like these days. I suspect few people can fight the temptation to make their lives a little easier and skip learning some lessons.

47. - Providing boilerplate/template code for common use cases
- Explaining what code is doing and how it works
- Refactoring/updating code when given specific requirements
- Providing alternative ways of doing things that you might not have thought of yourself

YMMV; every project is different so you might not have occasion to use all of these at the same time.

48. I enjoy it when:
Things are simple.
Things are complicated, but I can learn something useful.

I do not enjoy it when:
Things are arbitrarily complicated.
Things are complicated, but I'm just using AI to blindly get something done instead of learning.
Things are arbitrarily complicated and not incentivized to improve, because now "everyone can just use AI".

It feels like instead of all stepping back and saying "we need to simplify things" we've doubled down on abstraction _again_

49. This isn't supposed to be a slam on LLMs. They're genuinely useful for automating a lot of menial things... It's just there's a point where we end up automating ourselves out of the equation, where we lose opportunity to learn, and earn personal fulfilment.

Web dev is a soft target. It is very complex in parts, and what feels like a lot of menial boilerplate worth abstracting, but not understanding messy topics like CSS fundamentals, browser differences, form handling and accessibility means you don't know to ask your LLM for them.

You have to know what you don't know before you can consciously tell an LLM to do it for you.

LLMs will get better, but does that improve things, or just relegate the human experience further and further away from accomplishment?

50. Maybe it's just me, but I enjoy learning how all these systems work. Vibe coding and LLMs basically take that away from me, so I don't think I'll ever be as hyped for AI as other coders.

51. Meanwhile, I've been feeling the fun of development sucked away by LLMs. I recently started doing some coding problems where I intentionally turned off all LLM assistance, and THAT was fun.

Although I'll be happy to use LLMs for nightmare stuff like dependency management. So I guess it's about figuring out which part of development you enjoy and which part drains you, and refusing to let it take the former from you.

52. I really agree with this. For me it just feels so much more fun and rewarding to build my weekend projects, especially those where I just want to produce and deploy a working MVP out of an idea. If I'm trying out a new framework or whatever, though, I find it quite the opposite: AI removes all the fun parts of learning (obviously).

53. Of course it's fun. Making slop _is_ very fun. It's a low-effort, dopamine-driven way of producing things. Learning is uncomfortable. Improving things using only your brain cells can be very difficult and time-consuming.

54. I have learned more - not just about my daily driver languages, but about other languages I wouldn't have even cracked the seal on, as well as layers of hardware and maker skills - in the past two years than I did in the 30 years leading up to them.

I truly don't understand how anyone creative wouldn't find their productivity soar using these tools. If computers are bicycles for the mind, LLMs are powered exoskeletons with neural-controlled turret cannons.

55. To extend the metaphor, which provides better exercise for your body? A bicycle or a powered exoskeleton with turret cannons?

56. I don't bike for exercise. I bike to get where I'm going with the least amount of friction. Different tools for different jobs.

Also: I think we can agree that Ripley was getting a good workout.

57. The rate at which I'm learning new skills has accelerated thanks to LLMs.

Not learning anything while you use them is a choice. You can choose differently!

58. How are you using AI to learn? I see a lot of people say this but simply reading AI generated overviews or asking it questions isn't really learning.

59. I'm using it to build things.

Here's an example from the other day. I've always been curious about writing custom Python C extensions but I've never been brave enough to really try and do it.

I decided it would be interesting to dig into that by having Codex build a C extension for Python that exposed simple SQLite queries with a timeout.

It wrote me this: https://github.com/simonw/research/blob/main/sqlite-time-lim... - here's the shared transcript: https://chatgpt.com/s/cd_6958a2f131a081918ed810832f7437a2

I read the code it produced and ran it on my computer to see it work.

What did I learn?

- Codex can write, compile and test C extensions for Python now

- The sqlite3_progress_handler mechanism I've been hooking into for SQLite time limits in my Python code works in C too, and appears to be the recommended way to solve this

- How to use PyTuple_New(size) in C and then populate that tuple

- What the SQLite C API for running a query and then iterating though the results looks like, including the various SQLITE_INTEGER style constants for column types

- The "goto cleanup;" pattern for cleaning up on errors, including releasing resources and calling DECREF for the Python reference counter

- That a simple Python extension can be done with ~150 lines of readable and surprisingly non-threatening C

- How to use a setup.py and pyproject.toml function together to configure a Python package that compiles an extension

Would I have learned more if I had spent realistically a couple of days figuring out enough C and CPython and SQLite and setup.py trivia to do this without LLM help? Yes. But I don't have two days to spend on this flight of curiosity, so actually I would have learned nothing.

The LLM project took me ~1 minute to prompt and then 15 minutes to consume the lessons at the end. And I can do dozens of this kind of thing a day, in between my other work!

60. With all due respect you were reading, not learning. It's like when people watch educational YouTube videos as entertainment, it feels like they're learning but they aren't.

It's fine to use the LLMs in the same way that people watch science YouTube content, but maybe don't frame it like it's for learning. It can be great entertainment tho.

61. Disagree, it can be learning as long as you build out your mental model while reading. Having educational reading material for the exact thing you're working on is amazing at least for those with interest-driven brains.

Science YouTube is no comparison at all: while one can choose what to watch, it's a limited menu produced for a mass audience.

I agree though that reading LLM-produced blog posts (which many of the recent top submissions here seem to be) is boring.

62. The YouTube analogy doesn't completely hold.

It's more like jumping on a Zoom screen sharing session with someone who knows what they're doing, asking for a tailored example and then bouncing as many questions as you like off them to help understand what they did.

There's an interesting relevant concept in pedagogy called the "Worked example effect", https://en.wikipedia.org/wiki/Worked-example_effect - it suggests that showing people "worked examples" can be more effective than making them solve the problem themselves.

63. Ok but you didn't ask any questions in the transcript you provided. Maybe that one was an outlier?

In order to learn you generally need to actually do the thing, and usually multiple times. My point is that it's easy to use an AI to shortcut that part, with a healthy dose of sycophancy to make you feel like you learned so well.
</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.

commentCount

63
