Summarizer

LLM Input

llm/5888b8dc-b96e-4444-9c3c-465dde409e92/topic-7-ad8eb0b4-ec07-45d4-a13b-e997a504e030-input.json

prompt

You are a comment summarizer. Given a topic and a list of comments tagged with that topic, write a single paragraph summarizing the key points and perspectives expressed in the comments.

TOPIC: Learning while using LLMs

COMMENTS:
1. For me it's all the build stuff and scaffolding I have to get in place before I can even start tinkering on a project. I never formally learned all those systems and tools, and AI makes all of that 10x easier. When I hit something I can't figure out, instead of googling for half an hour it's 10 minutes with AI.

2. The difference is that after you’ve googled it for ½ hour, you’ve learned something. If you ask an LLM to do it for you, you’re none the wiser.

3. Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ...

4. Knowing how to use LLMs is a skill. Just winging it without any practice or exploration of how the tool fails can produce poor results.

5. As long as what it says is reliable and not made up.

6. This is just not true. I have wasted many hours looking for answers to hard-to-phrase questions and learned very little from the process. If an LLM can get me the same result in 30 seconds, it's very hard for me to see that as a bad thing. It just means I can spend more time thinking about the thing I want to be thinking about. I think to some extent people are valorizing suffering itself.

7. Learning means friction; it's not going to happen any other way.

8. Some of it is friction, some of it is play. With AI you can get to the play part faster, where you do learn a fair bit. But in a sense I agree that less is retained. I think that is not because of a lack of friction; instead it's the fast pace of getting what you want now. You no longer need to make a conscious effort to remember any of it because it's effortless to get it again with AI if you ever need it. If that's what you mean by friction then I agree.

9. Not necessarily. The end result of googling a problem might be copying a working piece of code off of stack exchange etc. without putting any work into understanding it.

Some people will try to vibe out everything with LLMs, but other people will use them to engage with their coding more directly and better understand what's happening, not worse.

10. You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working.

11. Studying gibberish doesn't teach you anything. If you were cargo-culting shit before AI, you weren't learning anything then either.

12. Necessarily, LLM output that works isn't gibberish.

The code that LLMs output has worked well enough to learn from since the initial launch of ChatGPT, even though back then you might have had to repeatedly say "continue" because it would stop in the middle of writing a function.

13. "Gibberish" code is necessary code which doesn't work. Even in the broader use of the term: https://en.wikipedia.org/wiki/Gibberish

Especially in this context, if a mystery box solves a problem for me, I can look at the solution and learn something from it; cf. how paper was inspired by watching wasps at work.

Even the abject failures can be interesting, though I find them more helpful for forcing my writing to be easier to understand.

14. It's not gibberish. More than that, LLMs frequently write comments (some are fluff, but some explain the reasoning quite well), variables are frequently named better than cdx, hgv, ti, and the like, plus watching the reasoning while it's happening provides more clues.

Also, it's actually fun watching LLMs debug. They're reasonably similar to devs while investigating, but they have a data bank the size of the internet, so they can pull hints that sometimes surprise even experienced devs.

I think hard-earned knowledge coming from actual coding is still useful to stay sharp, but it might turn out the balance is something like 25% handmade, 75% LLM-made.

15. I don't think "learning" is a goal here...

16. I don't want to waste time learning how to install and configure ephemeral tools that will be obsolete before I ever need to use them again.

17. And I don't want to use tools I don't understand, at least to some degree. I always get nervous when I do something but don't know why I'm doing it.

18. I didn't build my car either. But I understand a bit about most of the main mechanics: how ABS works, how power steering works, how an ICE works, and so on.

19. I don't think I'll learn anything by yet again implementing authentication, password reset, forgotten password, etc.

20. >> The difference is that after you’ve googled it for ½ hour, you’ve learned something.

I've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I googled yesterday.

21. Additionally, in the good/bad old days of using StackOverflow, maybe 10% of the answers explained how the thing you wanted to do actually worked; the rest just dumped some code on you and left you to figure it out by yourself, or more likely you'd just copy & paste it and be happy when it worked (if you were lucky)...

22. > The difference is whether or not you find computers interesting and enjoy understanding how they work.

I'm a stereotypical nerd, into learning for its own sake.

I can explain computers from the quantum mechanics of band gaps in semiconductors up to fudging objects into C and the basics of operating systems with pre-emptive multitasking, virtual memory, and copy-on-write as they were c. 2004.

Further up the stack it gets fuzzy (not that even these foundations aren't; I say "basics" of OSes because I couldn't write one); e.g. SwiftUI is basically a magic box, and I find it a pain to work with as a result.

LLM output is easier to understand than SwiftUI, even if the LLM itself has much weirder things going on inside.

23. Fair enough. But that particular thing could be anything that has been bothering you but that you didn't have the time or expertise to fix yourself.

I wanted that fixed, and I had given up on ever seeing it fixed. Suddenly, in only two hours, I had it fixed. And I learned a lot in the process, too!

24. Google, Facebook, Amazon, Microsoft... they literally all have vibe-coded code; it's not about vibe-coded or not, it's about how well the code is designed, how efficient it is, and how bug-free it is. Of course pro coders can debug and fix it better than some amateur coder, but LLMs are still so valuable. I let Gemini vibe-code little web projects for me and it serves me well. Although you have to explain everything step by step to it, and sometimes when it fixes one bug it accidentally introduces another. But we fix bugs together and learn together. And by the way, when Gemini fixes bugs, it puts comments in the code on how the particular bug was fixed.

25. Sure, but, in the real world, for the software to deliver a solution, it doesn't really matter if something is modelled in beautiful objects and concise packages, or if it's written in one big method. So for those that are more on the making /things/ side of the spectrum, I guess they wouldn't care if the LLM outputs code that has each iteration written separately.

It's just that if you really like to work on your craftsmanship, you spend most of the time rewriting/remodelling, because that's where the fun is if you're more on the /crafting/ side of the spectrum, and LLMs don't really assist in that part (yet?). Maybe LLMs could be used to discuss ways to model a problem space?

26. As I've gotten more experience I've tended to find more fun in tinkering with architectures than tinkering with code. I'm currently working on making a secure zero-trust bare metal kubernetes deployment that relies on an immutable UKI and TPM remote attestation. I'm making heavy use of LLMs for the different implementation details as I experiment with the architecture. As far as I know, to the extent I'm doing anything novel, it's because it's not a reasonable approach for engineering reasons even if it technically works, but I'm learning a lot about how TPMs work and the boot process and the kernel.

I still enjoy writing code as well, but I see them as separate hobbies. LLMs can take my hand-optimized assembly drag racing or the joy of writing a well-crafted library from my cold dead hands, but that's not always what I'm trying to do and I'll gladly have an LLM write my OCI layout directory to CPIO helper or my Bazel rule for putting together a configuration file and building the kernel so that I can spend my time thinking about how the big pieces fit together and how I want to handle trust roots and cold starts.

27. So much this. The act of having the agent create a research report first, a detailed plan second, then maybe implement it is itself fun and enjoyable. The implementation is the tedious part these days, the pie in the sky research and planning is the fun part and the agent is a font of knowledge especially when it comes to integrating 3 or 4 languages together.

28. This goes further into LLM usage than I prefer to go. I learn so much better when I do the research and make the plan myself that I wouldn’t let an LLM do that part even if I trusted the LLM to do a good job.

I basically don’t outsource stuff to an LLM unless I know roughly what to expect the LLM output to look like and I’m just saving myself a bunch of typing.

“Could you make me a Go module with an API similar to archive/tar.Writer that produces a CPIO archive in the newcx format?” was an example from this project.

29. Ultimately it's up to the user to decide what to do with his time; it's still a good bargain that leaves a lot of sovereignty to the user. I like to code a little too much; I got into deep tech to capacities I couldn't imagine before, but at some point you hit rock bottom and you've gotta ship something that makes sense. I'm like a really technical "predator", in the sense that, to be honest with myself, it has almost become a form of consumption rather than pure problem solving. For very passionate people it can be difficult to draw the line between pleasure and work, especially given that we just do what we like in the first place, so all that time feels robbed from us; from the standpoint of a "shipper" who didn't care about it in the first place, it feels like freedom.

But I'd argue that if anyone wants to jump into technical stuff, it has never been so openly accessible. Before, you could maybe join some niche Slack where competent programmers were doing great stuff; today a solo junior can ship you a key-val store that is going to be fighting Redis in benchmarks.

It really is not a time to slack off, in my opinion; everything feels like it already exists and has mostly already been dealt with. But again, those who are frustrated with the status quo will always find something to do.

I do get that this has created a very different space, where past acquired skill-sets don't necessarily translate as well today; maybe it's just going to be different finding your space than it was 10 years ago.

I like that the cards have been re-dealt, though; it's arguably way more open than the Stack Overflow and pre-AI era, when knowledge was much more difficult to create.

30. I have nearly two decades of programming experience, mostly server-side. The other day I wanted a quick desktop (Linux) program to chat with an LLM. I found out about the Viciane launcher, then chalked out an extension in React (which I have never used) to chat with an LLM using an OpenAI-compatible API. Antigravity wrote a bare-minimum working extension in a single prompt. I didn't even need to research how to write an extension for an app released only three to five months ago. I then used AI assistance to add more features and polish the UI.

This was a fun weekend but I would have procrastinated forever without a coding agent.

31. I think it just depends on the person or the type of project. If I'm learning something or building a hobby project, I'll usually just use an autocomplete agent and leave Claude Code at work. On the other hand, if I want to build something that I actually need, I may lean on AI assistants more because I'm more interested in the end product. There are certain tasks as well that I just don't need to do by hand, like typing an existing SQL schema into an ORM's DSL.

32. Historically, tinkerers had to stay within an extremely limited scope of what they knew well enough to enjoy working on.

AI changes that. If someone wants to code in a new area, it's 10000000x easier to get started.

What if the # of handwritten lines of code is actually increasing with AI usage?

33. I enjoy noodling around with pointers and unsafe code in Rust. Claude wrote all the documentation, to Rust standards, with nice examples for every method.

I decided to write an app in Rust with a React UI, and Claude wrote almost all the typescript for me.

So I’ve used Claude at both ends of the spectrum. I had way more fun in every situation.

AI is, fortunately, very bad at the things I find fun, at least for now, and very good at the things I find booooring (read in Scott Pilgrim voice).

34. Look, yeah, one-shotting stuff makes generic UIs; an impressive feat, but generic.

It's getting years of side projects off the ground for me,

now in languages I never learned or got professional validation for: Rust, Lua for Roblox … in 2 parallel terminal windows and Claude Code instances,

all while I get to push frontend development further and more meticulously in a 3rd. UX-heavy design with SVG animations? I can do that now; that's fun for me.

I can make experiences that I would never spend a business quarter on; I can rapidly iterate on designs in a way I would never pay a Fiverr contractor or three for.

For me the main skill is knowing what I want, and it's entirely questionable whether that's a moat at all, but for now it is, because all those "no code" seeking product managers and ideas guys are just enamored that they can make a generic something compile.

I know when to point out that the AI contradicted itself in a code concept, and when to interrupt when it's about to go off the rails.

So far so great, and my backend deployment proficiency has gone from CRUD-app-only to replicating, understanding, and surpassing what the veteran backend devs on my teams could do.

I would previously have called myself full stack, but now I know where my limits in understanding are.

35. Hah, centering divs with flexbox is one of my uses for this too! I can never remember the syntax off the top of my head, but if I say "center it with flexbox" it spits out exactly the right code every time.

If I do this a few more times it might even stick in my head.

36. The middle step is asking an LLM how it's done and making the change yourself. You skip the web junk and learn how it's done for next time.

37. Yep, that’s not a bad approach, either.

I did that a lot initially; it's really only with the advent of Claude Code integrated with VS Code that I'm learning more like I would from a code review.

It also depends on the project. Work code gets a lot more scrutiny than side projects, for example.

38. If only it were that easy. I got really good at centering and aligning stuff, but only when the application is constructed in the way I expect. That's usually not a problem since I'm typically working on something I built myself, but if I need to make a tweak to something I didn't build, I frequently find myself frustrated and irritated, especially when some higher or lower level is overriding the setting I just added.

As a bonus, I pay attention to what the AI did and its results, and I have actually learned quite a bit about how to do this myself even without AI assistance.

39. I'll also argue that the level of skill depends on what one can make in those two days... it's like a mirror. If you don't know what to ask for, it doesn't know what to produce.

40. I experienced the exact same thing: I needed a web tool, and as far as I could tell from recent reviews, the offerings in the Chrome extension store seemed either a little suspicious or broken, so I made my own extension in a little under an hour.

It used recent APIs and patterns that I didn't have to go read extensive docs for or do deep learning on. It has an acceptable test suite. The code was easy to read, and reasonable, and I know no one will ever flip it into ad-serving malware by surprise.

A big thing is just that the idea of creating a non-trivial tool is suddenly a valid answer to the question. Previously, I know I would have had to spend a bunch of time reading docs, finding examples, etc., let alone the inevitable farting around with a minor side-quest because something wasn't working, or rethinking and reworking some design decision that on the whole wasn't that important. Instead, something popped into existence, mostly worked, and I could review and tweak it.

It's a little bit like jumping from a problem of "solve a polynomial" to one of "verify a solution for a polynomial".

41. Definitely. I'm not disparaging the process of assembling IKEA furniture, nor the process of producing software using LLMs. I've done both, and they have their time and place.

What I'm pushing back on is the idea that these are equivalent to carpentry and programming. I think we need new terminology to describe this new process. "Vibe coding" is at the extreme end of it, and "LLM-assisted software development" is a mouthful.

Although, the IKEA analogy could be more accurate: the assembly instructions can be wrong; some screws may be missing; you ordered an office chair and got a dining chair; a desk may have five legs; etc. Also, the thing you built is made out of hollow MDF, and will collapse under moderate levels of stress. And if you don't have prior experience building furniture, you end up with no usable skills to modify the end result beyond the manufacturer's original specifications.

So, sure, the seemingly quick and easy process might be convenient when it works. Though I've found that it often requires more time and effort to produce what I want, and I end up with a lackluster product, and no learned skills to show for it. Thus learning the difficult process is a more rewarding long-term investment if you plan to continue building software or furniture in the future. :)

42. One concern is that less experienced engineers might never become experienced if they're using AI from the start. Not that everyone needs to be good at coding. But I wonder what new grads are like these days. I suspect few people can fight the temptation to make their lives a little easier and skip learning some lessons.

43. - Providing boilerplate/template code for common use cases
- Explaining what code is doing and how it works
- Refactoring/updating code when given specific requirements
- Providing alternative ways of doing things that you might not have thought of yourself

YMMV; every project is different so you might not have occasion to use all of these at the same time.

44. I enjoy when:
Things are simple.
Things are complicated, but I can learn something useful.

I do not enjoy when:
Things are arbitrarily complicated.
Things are complicated, but I'm just using AI to blindly get something done instead of learning.
Things are arbitrarily complicated and not incentivized to improve because now "everyone can just use AI".

It feels like instead of all stepping back and saying "we need to simplify things", we've doubled down on abstraction _again_.

45. This isn't supposed to be a slam on LLMs. They're genuinely useful for automating a lot of menial things... It's just that there's a point where we end up automating ourselves out of the equation, where we lose the opportunity to learn and to earn personal fulfilment.

Web dev is a soft target. It is very complex in parts, with what feels like a lot of menial boilerplate worth abstracting, but not understanding messy topics like CSS fundamentals, browser differences, form handling, and accessibility means you don't know to ask your LLM about them.

You have to know what you don't know before you can consciously tell an LLM to do it for you.

LLMs will get better, but does that improve things, or just relegate the human experience further and further away from accomplishment?

46. Maybe it's just me, but I enjoy learning how all these systems work. Vibe coding and LLMs basically take that away from me, so I don't think I'll ever be as hyped for AI as other coders.

47. I really agree with this. For me it just feels so much more fun and rewarding to build my weekend projects, especially those where I just want to produce and deploy a working MVP out of an idea. If I'm trying out a new framework or whatever, though, I find quite the opposite: AI removes all the fun parts of learning (obviously).

48. Of course it's fun. Making slop _is_ very fun. It's a low-effort, dopamine-driven way of producing things. Learning is uncomfortable. Improving things using only your brain cells can be very difficult and time-consuming.

49. I have learned more - not just about my daily driver languages, but about other languages I wouldn't have even cracked the seal on, as well as layers of hardware and maker skills - in the past two years than I did in the 30 years leading up to them.

I truly don't understand how anyone creative wouldn't find their productivity soar using these tools. If computers are bicycles for the mind, LLMs are powered exoskeletons with neural-controlled turret cannons.

50. To extend the metaphor, which provides better exercise for your body? A bicycle or a powered exoskeleton with turret cannons?

51. I don't bike for exercise. I bike to get where I'm going with the least amount of friction. Different tools for different jobs.

Also: I think we can agree that Ripley was getting a good workout.

52. The rate at which I'm learning new skills has accelerated thanks to LLMs.

Not learning anything while you use them is a choice. You can choose differently!

53. How are you using AI to learn? I see a lot of people say this, but simply reading AI-generated overviews or asking it questions isn't really learning.

54. I'm using it to build things.

Here's an example from the other day. I've always been curious about writing custom Python C extensions but I've never been brave enough to really try and do it.

I decided it would be interesting to dig into that by having Codex build a C extension for Python that exposed simple SQLite queries with a timeout.

It wrote me this: https://github.com/simonw/research/blob/main/sqlite-time-lim... - here's the shared transcript: https://chatgpt.com/s/cd_6958a2f131a081918ed810832f7437a2

I read the code it produced and ran it on my computer to see it work.

What did I learn?

- Codex can write, compile and test C extensions for Python now

- The sqlite3_progress_handler mechanism I've been hooking into for SQLite time limits in my Python code works in C too, and appears to be the recommended way to solve this

- How to use PyTuple_New(size) in C and then populate that tuple

- What the SQLite C API for running a query and then iterating through the results looks like, including the various SQLITE_INTEGER-style constants for column types (see the sketch after this list)

- The "goto cleanup;" pattern for cleaning up on errors, including releasing resources and calling DECREF for the Python reference counter

- That a simple Python extension can be done with ~150 lines of readable and surprisingly non-threatening C

- How to use a setup.py and pyproject.toml together to configure a Python package that compiles an extension
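
For anyone curious, here is a minimal sketch of what that shape of code looks like. It is my own illustration, not the code from the repo linked above: the module name sqlite_timeout, the function query_with_timeout, and the crude CPU-clock deadline are all invented for this example. It shows the sqlite3_progress_handler abort mechanism, PyTuple_New followed by populating the tuple, and the "goto cleanup;" pattern.

```c
/* sketch.c - illustrative only, not the code from the linked repo */
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <sqlite3.h>
#include <time.h>

typedef struct { double deadline; } TimeoutCtx;

/* sqlite3_progress_handler callback: a non-zero return aborts the query. */
static int on_progress(void *arg) {
    TimeoutCtx *ctx = (TimeoutCtx *)arg;
    /* CPU time rather than wall time, for simplicity */
    return ((double)clock() / CLOCKS_PER_SEC) > ctx->deadline;
}

static PyObject *query_with_timeout(PyObject *self, PyObject *args) {
    const char *db_path, *sql;
    double timeout_s;
    if (!PyArg_ParseTuple(args, "ssd", &db_path, &sql, &timeout_s))
        return NULL;

    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    PyObject *rows = NULL;
    int rc;

    if (sqlite3_open(db_path, &db) != SQLITE_OK) {
        PyErr_SetString(PyExc_RuntimeError, sqlite3_errmsg(db));
        goto cleanup;                       /* the "goto cleanup;" pattern */
    }

    TimeoutCtx ctx = { (double)clock() / CLOCKS_PER_SEC + timeout_s };
    sqlite3_progress_handler(db, 1000, on_progress, &ctx); /* check every ~1000 VM ops */

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) {
        PyErr_SetString(PyExc_RuntimeError, sqlite3_errmsg(db));
        goto cleanup;
    }

    rows = PyList_New(0);
    if (!rows) goto cleanup;

    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
        int ncols = sqlite3_column_count(stmt);
        PyObject *row = PyTuple_New(ncols);          /* create, then populate */
        if (!row) goto error;
        for (int i = 0; i < ncols; i++) {
            PyObject *val;
            switch (sqlite3_column_type(stmt, i)) {  /* SQLITE_INTEGER etc. */
                case SQLITE_INTEGER:
                    val = PyLong_FromLongLong(sqlite3_column_int64(stmt, i));
                    break;
                case SQLITE_FLOAT:
                    val = PyFloat_FromDouble(sqlite3_column_double(stmt, i));
                    break;
                case SQLITE_NULL:
                    Py_INCREF(Py_None);
                    val = Py_None;
                    break;
                default:  /* treat TEXT (and, crudely, BLOB) as text */
                    val = PyUnicode_FromString(
                        (const char *)sqlite3_column_text(stmt, i));
            }
            if (!val) { Py_DECREF(row); goto error; }
            PyTuple_SET_ITEM(row, i, val);           /* steals the reference */
        }
        int append_failed = PyList_Append(rows, row) < 0;
        Py_DECREF(row);
        if (append_failed) goto error;
    }
    if (rc != SQLITE_DONE) {
        PyErr_SetString(rc == SQLITE_INTERRUPT ? PyExc_TimeoutError
                                               : PyExc_RuntimeError,
                        sqlite3_errmsg(db));
        goto error;
    }
    goto cleanup;

error:
    Py_CLEAR(rows);        /* on failure, drop the result and return NULL */
cleanup:
    sqlite3_finalize(stmt);  /* both are safe no-ops on NULL */
    sqlite3_close(db);
    return rows;
}

static PyMethodDef methods[] = {
    {"query_with_timeout", query_with_timeout, METH_VARARGS,
     "Run a SQLite query with a time limit, returning a list of row tuples."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef module_def = {
    PyModuleDef_HEAD_INIT, "sqlite_timeout", NULL, -1, methods
};

PyMODINIT_FUNC PyInit_sqlite_timeout(void) {
    return PyModule_Create(&module_def);
}
```

A setup.py declaring an Extension("sqlite_timeout", ["sketch.c"], libraries=["sqlite3"]) is enough to build it, after which it imports like any other Python module.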

Would I have learned more if I had spent realistically a couple of days figuring out enough C and CPython and SQLite and setup.py trivia to do this without LLM help? Yes. But I don't have two days to spend on this flight of curiosity, so actually I would have learned nothing.

The LLM project took me ~1 minute to prompt and then 15 minutes to consume the lessons at the end. And I can do dozens of this kind of thing a day, in between my other work!

55. With all due respect, you were reading, not learning. It's like when people watch educational YouTube videos as entertainment: it feels like they're learning, but they aren't.

It's fine to use the LLMs in the same way that people watch science YouTube content, but maybe don't frame it like it's for learning. It can be great entertainment tho.

56. Disagree; it can be learning as long as you build out your mental model while reading. Having educational reading material for the exact thing you're working on is amazing, at least for those with interest-driven brains.

Science YouTube is no comparison at all: while one can choose what to watch, it's a limited menu that's produced for a mass audience.

I agree though that reading LLM-produced blog posts (which many of the recent top submissions here seem to be) is boring.

57. The YouTube analogy doesn't completely hold.

It's more like jumping on a Zoom screen sharing session with someone who knows what they're doing, asking for a tailored example and then bouncing as many questions as you like off them to help understand what they did.

There's an interesting relevant concept in pedagogy called the "Worked example effect", https://en.wikipedia.org/wiki/Worked-example_effect - it suggests that showing people "worked examples" can be more effective than making them solve the problem themselves.

58. Ok but you didn't ask any questions in the transcript you provided. Maybe that one was an outlier?

In order to learn you generally need to actually do the thing, and usually multiple times. My point is that it's easy to use an AI to shortcut that part, with a healthy dose of sycophancy to make you feel like you learned so well.

59. Yeah in this particular case I didn't ask any follow-up questions directly to Claude Code - I pasted a few things into Claude chat though, here's one of those conversations: https://claude.ai/share/9c404b38-efed-4789-bea1-06bca5f5d6e4

60. You were never able to stop using the techniques you learned, and you were always able to keep up with minimal effort - you didn’t need to learn any frameworks.

I’m glad you’re having fun, but you didn’t need AI to overcome some laborious hurdle. The only hurdle that existed was your own laziness.

61. I remember when Hacker News felt smaller. Threads were shorter. Context fit in your head. You could read the linked article, skim the comments, and jump in without feeling like you’d missed a prerequisite course.

It probably didn’t feel special at the time, but looking back, it was simpler. The entire conversation space was manageable. If you had a thought, you could express it clearly, hit “reply,” and reasonably expect to be understood.

As a single commenter, you could hold the whole discussion in your mind. From article to argument to conclusion. Or at least, it felt that way.

I’m probably romanticizing it—but you know what I mean.

Now, articles are denser. Domains are deeper. Threads splinter instantly. Someone cites a paper, someone else links a counter-paper, a third person references a decades-old mailing list post, and suddenly the discussion assumes years of background you may or may not have.

You’re expected to know the state of the art, the historical context, the common rebuttals, the terminology, and the unwritten norms—while also being concise, charitable, and original.

Every field has matured—probably for the better—but it demands deeper domain knowledge just to participate without embarrassing yourself. Over time, I found myself backing out of threads I was genuinely interested in, not because I had nothing to say, but because the cognitive load felt too high. As a solo thinker, it became harder to keep up.

> AI has entered the chat.

They’re far from perfect, but tools like Claude and ChatGPT gave me something I hadn’t felt in a long time: _leverage_.

I can now quickly:

- Summarize long articles
- Recall prior art
- Check whether a take is naïve or already debunked
- Clarify my own thinking before posting

Suddenly, the background complexity matters a lot less. I can go from “half-formed intuition” to “coherent comment” in minutes instead of abandoning the tab entirely. I can re-enter conversations I would’ve previously skipped.

> Oh no, you’re outsourcing thinking—bet it’s all slop!

Over the years, I’ve read thousands of great HN comments. Thoughtful ones. Careful ones. People who knew when to hedge, when to cite, when to shut up. That pattern is in my head now.

With AI, I can lean on that experience. I can sanity-check tone. I can ask, “Is this fair?” or “What am I missing?” I can stress-test an argument before I inflict it on strangers.

When AI suggests something wrong, I know it’s wrong. When it’s good, I recognize why. Iteration is fast. Even with back-and-forth refinement, I’m dramatically more effective at expressing what I already think.

The goal hasn’t changed: contribute something useful to the discussion. The bar is still high. But now I have a ladder instead of a sheer wall.

There’s mental space for curiosity again. My head isn’t constantly overloaded with “did I miss context?”, “is this a known bad take?”, or “will this derail into pedantry?” I can offload that checking to AI and focus on the _idea_.

That leaves room to explore. To ask better questions. To write comments that connect ideas instead of defensively hedging every sentence. To participate for the joy of thinking in public again.

It was never about typing comments fast, or winning arguments. It was about engaging with interesting people on interesting problems. Writing was just the interface.

And with today’s tools, that interface is finally lighter again. AI really has made commenting on Hacker News fun again.

Write a concise, engaging paragraph (3-5 sentences) that captures the main ideas, notable perspectives, and overall sentiment of these comments regarding the topic. Focus on the most interesting and representative viewpoints. Do not use bullet points or lists - write flowing prose.

topic

Learning while using LLMs

commentCount

61
