For me it's all the build stuff and scaffolding I have to get in place before I can even start tinkering on a project. I never formally learned all the systems and tools, and AI makes all of that 10x easier. When I hit something I cannot figure out, instead of googling for half an hour it's 10 minutes with AI.
Exactly. And I was never particularly good at coding, either. Pairing with Gemini to finally figure out how to decompile an old Java app so I could make little changes to my user profile and some action files? That was fun! And I was never going to be able to figure out how to do it on my own. I had tried!
Fair enough. But that particular thing could be anything that has been bothering you that you didn’t have the time or expertise to fix yourself. I wanted that fixed, and I had given up on ever seeing it fixed. Suddenly, in only two hours, I had it fixed. And I learned a lot in the process, too!
As I've gotten more experience I've tended to find more fun in tinkering with architectures than tinkering with code. I'm currently working on a secure, zero-trust, bare-metal Kubernetes deployment that relies on an immutable UKI and TPM remote attestation. I'm making heavy use of LLMs for the different implementation details as I experiment with the architecture. As far as I know, to the extent I'm doing anything novel, it's only because it's not a reasonable approach for engineering reasons even if it technically works, but I'm learning a lot about how TPMs work, the boot process, and the kernel. I still enjoy writing code as well, but I see them as separate hobbies. LLMs can take my hand-optimized assembly drag racing or the joy of writing a well-crafted library from my cold dead hands, but that's not always what I'm trying to do, and I'll gladly have an LLM write my OCI-layout-to-CPIO helper or my Bazel rule for putting together a configuration file and building the kernel, so that I can spend my time thinking about how the big pieces fit together and how I want to handle trust roots and cold starts.
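(For a flavor of the kind of helper I mean, here's a minimal sketch of an OCI-layout-to-CPIO packer that just shells out to the system cpio tool. The function name and paths are made up for illustration, not my actual code.)

```python
# Pack an OCI image layout directory into a newc-format CPIO archive
# (the same format an initramfs uses) by shelling out to `cpio`.
import subprocess
from pathlib import Path

def oci_layout_to_cpio(layout_dir: str, out_path: str) -> None:
    layout = Path(layout_dir)
    # Sanity check that this actually looks like an OCI image layout.
    for required in ("oci-layout", "index.json"):
        if not (layout / required).exists():
            raise FileNotFoundError(f"{layout_dir} is missing {required}")

    # Collect every path relative to the layout root, directories included,
    # so the archive unpacks to the same structure.
    entries = sorted(str(p.relative_to(layout)) for p in layout.rglob("*"))
    file_list = "\n".join(["."] + entries) + "\n"

    with open(out_path, "wb") as out:
        # `cpio -o -H newc` reads filenames from stdin and writes the archive
        # to stdout; running with cwd=layout keeps the stored paths relative.
        subprocess.run(
            ["cpio", "-o", "-H", "newc"],
            input=file_list.encode(),
            stdout=out,
            cwd=layout,
            check=True,
        )

# Usage (hypothetical paths):
# oci_layout_to_cpio("build/oci-image", "build/oci-image.cpio")
```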
It's a little shameful, but I still struggle with centering divs on a page. Yes, I've known about flexbox for more than a decade, but I always have to search to remember how it's done. So instead of refreshing that rarely used knowledge I just ask the AI to do it for me. The implications of this vs. searching the MDN docs is another conversation to have.
Hah, centering divs with flexbox is one of my uses for this too! I can never remember the syntax off the top of my head, but if I say "center it with flexbox" it spits out exactly the right code every time. If I do this a few more times it might even stick in my head.
Yep, that’s not a bad approach, either. I did that a lot initially, it’s really only with the advent of Claude Code integrated with VS Code that I’m learning more like I would learn from a code review. It also depends on the project. Work code gets a lot more scrutiny than side projects, for example.
If only it were that easy. I got really good at centering and aligning stuff, but only when the application is constructed the way I expect. This is usually not a problem since I'm usually working on something I built myself, but if I need to make a tweak to something I didn't build, I frequently find myself frustrated and irritated, especially when there is some higher or lower level that is overriding the setting I just added. As a bonus, I pay attention to what the AI did and its results, and I have actually learned quite a bit about how to do this myself, even without AI assistance.
> AI assistance means you can get something useful done in half an hour, or even while you are doing other stuff. You don't need to carve out 2-4 hours to ramp up any more.

That fits my experience with a Chrome extension I created. Instead of having to read the docs, find example projects, etc., I was able to get a working version in less than an hour.
I experienced the exact same thing: I needed a web tool, and as far as I could tell from recent reviews, the offerings in the Chrome extension store seemed either a little suspicious or broken, so I made my own extension in a little under an hour. It used recent APIs and patterns that I didn't have to go read extensive docs for or spend time learning deeply. It has an acceptable test suite. The code was easy to read and reasonable, and I know no one will ever flip it into ad-serving malware by surprise. A big thing is just that the idea of creating a non-trivial tool is suddenly a valid answer to the question. Previously, I know I would have had to spend a bunch of time reading docs, finding examples, etc., let alone the inevitable farting around on a minor side quest because something wasn't working, or rethinking and reworking some design decision that on the whole wasn't that important. Instead, something popped into existence, mostly worked, and I could review and tweak it. It's a little bit like jumping from the problem of "solve a polynomial" to "verify a solution to a polynomial".
One concern is those less experienced engineers might never become experienced if they’re using AI from the start. Not that everyone needs to be good at coding. But I wonder what new grads are like these days. I suspect few people can fight the temptation to make their lives a little easier and skip learning some lessons.
A year or so ago I was seriously thinking of making a series of videos showing how coding agents were just plain bad at producing code. This was based on my experience trying to get them to do very simple things (e.g. a five-pointed star, or text flowing around the edge of a circle, in HTML/CSS). They still tend to fail at things like this, but I've come to realize that there are whole classes of adjacent problems they're good at, and I'm starting to leverage their strengths rather than get hung up on their weaknesses. Perhaps you're not playing to their strengths, or just haven't cracked the code for how to prompt them effectively? Prompt engineering is an art, and slight changes to prompts can make a big difference in the resulting code.
- Providing boilerplate/template code for common use cases
- Explaining what code is doing and how it works
- Refactoring/updating code when given specific requirements
- Providing alternative ways of doing things that you might not have thought of yourself

YMMV; every project is different, so you might not have occasion to use all of these at the same time.
I’m better at it in the spaces where I deliver value. For me that’s the backend, and I’m building complex backends with simple frontends. It sounds like your expertise is the front end, so you’re gonna be doing stuff that’s beyond me, and beyond what the AI was trained on. I found ways to make the AI solve backend pain points (documentation, tests, boilerplate like integrations). There are probably spaces where the AI can make your work more productive, or, like my move into the front end, let you do work that you didn’t do before.
In particular, and speaking as a backend engineer with zero web design skills, building things with charts/graphs is amazing nowadays! You can literally just operate at the level of "add another line representing the foo data", "add a scatterplot below it", "make them line up", "actually, make it a more reddish pink", etc. In the past I've had opinions about d3 and vega-lite and altair and matplotlib and so on, and learned how to use them at least at a superficial level. For my last personal UI with charts I didn't even ask it what framework it had chosen (chart.js, it turns out).
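That prompt-level iteration maps onto surprisingly little code in the end. A rough sketch of the sort of chart that conversation produces, in matplotlib rather than chart.js, with made-up "foo"/"bar" data:

```python
import matplotlib.pyplot as plt

x = list(range(10))
foo = [v * 1.5 for v in x]
bar = [v ** 1.2 for v in x]
scatter_y = [v + (v % 3) for v in x]

# "make them line up": two stacked panels sharing the x axis.
fig, (top, bottom) = plt.subplots(2, 1, sharex=True)

# "add another line representing the foo data"
top.plot(x, foo, label="foo")
top.plot(x, bar, label="bar")
top.legend()

# "add a scatterplot below it", "make it a more reddish pink"
bottom.scatter(x, scatter_y, color="#ff6688")

plt.show()
```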
I really agree with this. For me it just feels so much more fun and rewarding to build my weekend projects, especially those projects where I just want to produce and deploy a working MVP out of an idea. If I'm trying out a new framework or whatever, though, I find it's quite the opposite: AI removes all the fun parts of learning (obviously).
I have learned more - not just about my daily driver languages, but about other languages I wouldn't have even cracked the seal on, as well as layers of hardware and maker skills - in the past two years than I did in the 30 years leading up to them. I truly don't understand how anyone creative wouldn't find their productivity soar using these tools. If computers are bicycles for the mind, LLMs are powered exoskeletons with neural-controlled turret cannons.
The rate at which I'm learning new skills has accelerated thanks to LLMs. Not learning anything while you use them is a choice. You can choose differently!
I'm using it to build things. Here's an example from the other day. I've always been curious about writing custom Python C extensions, but I've never been brave enough to really try it. I decided it would be interesting to dig into that by having Codex build a C extension for Python that exposed simple SQLite queries with a timeout.

It wrote me this: https://github.com/simonw/research/blob/main/sqlite-time-lim... - here's the shared transcript: https://chatgpt.com/s/cd_6958a2f131a081918ed810832f7437a2

I read the code it produced and ran it on my computer to see it work. What did I learn?

- Codex can write, compile, and test C extensions for Python now
- The sqlite3_progress_handler mechanism I've been hooking into for SQLite time limits in my Python code works in C too, and appears to be the recommended way to solve this
- How to use PyTuple_New(size) in C and then populate that tuple
- What the SQLite C API for running a query and then iterating through the results looks like, including the various SQLITE_INTEGER style constants for column types
- The "goto cleanup;" pattern for cleaning up on errors, including releasing resources and calling DECREF for the Python reference counter
- That a simple Python extension can be done with ~150 lines of readable and surprisingly non-threatening C
- How to use setup.py and pyproject.toml together to configure a Python package that compiles an extension

Would I have learned more if I had spent realistically a couple of days figuring out enough C and CPython and SQLite and setup.py trivia to do this without LLM help? Yes. But I don't have two days to spend on this flight of curiosity, so actually I would have learned nothing. The LLM project took me ~1 minute to prompt and then 15 minutes to consume the lessons at the end. And I can do dozens of this kind of thing a day, in between my other work!
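For comparison, the pure-Python version of that progress-handler trick is only a few lines. A minimal sketch (the helper name and the 10,000-opcode interval are my own choices here, not taken from the extension above):

```python
# Abort a SQLite query after a wall-clock time limit using the
# progress handler hook in the stdlib sqlite3 module.
import sqlite3
import time

def query_with_timeout(conn: sqlite3.Connection, sql: str, timeout_s: float):
    deadline = time.monotonic() + timeout_s

    def check_deadline():
        # Returning a non-zero value tells SQLite to abort the running
        # query, which surfaces as sqlite3.OperationalError("interrupted").
        return 1 if time.monotonic() > deadline else 0

    # Invoke the handler every N SQLite virtual machine instructions.
    conn.set_progress_handler(check_deadline, 10_000)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.set_progress_handler(None, 0)

# Usage:
# conn = sqlite3.connect(":memory:")
# rows = query_with_timeout(conn, "select 1", timeout_s=0.5)
```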
With all due respect you were reading, not learning. It's like when people watch educational YouTube videos as entertainment, it feels like they're learning but they aren't. It's fine to use the LLMs in the same way that people watch science YouTube content, but maybe don't frame it like it's for learning. It can be great entertainment tho.
The YouTube analogy doesn't completely hold. It's more like jumping on a Zoom screen sharing session with someone who knows what they're doing, asking for a tailored example and then bouncing as many questions as you like off them to help understand what they did. There's an interesting relevant concept in pedagogy called the "Worked example effect", https://en.wikipedia.org/wiki/Worked-example_effect - it suggests that showing people "worked examples" can be more effective than making them solve the problem themselves.
Ok but you didn't ask any questions in the transcript you provided. Maybe that one was an outlier? In order to learn you generally need to actually do the thing, and usually multiple times. My point is that it's easy to use an AI to shortcut that part, with a healthy dose of sycophancy to make you feel like you learned so well.
Disagree, it can be learning as long as you build out your mental model while reading. Having educational reading material for the exact thing you're working on is amazing, at least for those with interest-driven brains. Science YouTube is no comparison at all: while one can choose what to watch, it's a limited menu produced for a mass audience. I agree, though, that reading LLM-produced blog posts (which many of the recent top submissions here seem to be) is boring.
I agree with this. I've been able to tackle projects I've been wanting to do for ages with LLMs because they let me focus on abstractions first and get over the friction of starting the project. Once I get my footing, I can use them to generate more and more specialized code and ultimately get to a place where the code is good.