This is exactly how I use them too! What I usually do is give the LLM bullet points or an outline of what I want to say, let it generate a first attempt at it, and then reshape and rewrite what I don’t like (which is often most of it). I think, more than anything, it just helps me to quickly get past that “staring at a blank page” stage.
I love the way you've expressed this! This was my experience doing the lit review for my PhD. I read more than a few "scholarly" texts that were perhaps impressive from a language/vocabulary standpoint but contained ideas that were either poorly supported or too simple to amount to any meaningful contribution to the field.
I felt this comment in my soul. I’ll never understand it: I’ve written thousands of lines of code (as a hobbyist) to solve all sorts of problems I’ve run into and yet always seem to struggle to wrap my mind around the core algorithms any real developer should be able to handle easily. This is why I’ve never pursued programming as a career.
It took the computer scientists of the past a lot of effort to come up with these complicated algorithms. They are not easy or trivial. They are complicated, and it's OK that you can't just quickly understand them. Your imaginary "real developer" at best memorised the algorithms, but that hardly differs from a smart monkey, so probably not something to be very proud of.
It is your choice which career to pursue, but in my experience, the vast majority of programmers don't know algorithms and data structures beyond the very shallow understanding required to pass some popular interview questions. Maybe you've set artificial barriers for yourself that were higher than necessary.
To be a professional software developer, you need to write code to solve real-life tasks. These tasks are mostly super-primitive in terms of algorithms. You just glue together libraries and write so-called "business logic" as an incomprehensible tree of ifs which nobody truly understands. People love it and pay money for it.
Thanks for your kind comment! I do not have any systematic learning of computer science; I often feel confused when reading textbooks on algorithms hahaha.
Should I be familiar with every step of Dijkstra’s search algorithm and remember the pseudocode at all times? Why don’t the textbooks explain why the algorithm is correct?
> Should I be familiar with every step of Dijkstra’s search algorithm and remember the pseudocode at all times?
Somehow, I think you already know the answer to that is "no".
I've been working as a software engineer for over 8 years, with no computer science education. I don't know what Dijkstra's search algorithm is, let alone have memorised the pseudocode. I flicked through a book of data structures and algorithms once, but that was after I got my first software job. Unless you're only aiming for Google etc, you don't really need any of this.
You should know the trade-offs of different algorithms, though. Many libraries let you choose the implementation for a specific problem. For instance, a tree vs. a hash map, where you trade memory for speed.
> Why don’t the textbooks explain why the algorithm is correct?
The good ones do!
> Should I be familiar with every step of Dijkstra’s search algorithm and remember the pseudocode at all times?
If it’s the kind of thing you care to be familiar with, then being able to rederive every step of the usual-suspect algorithms is well within reach, yes. You don’t need to remember things in terms of pseudocode as such, more just broad concepts.
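Not that anyone needs it memorised, but as an illustration of how compact the "broad concept" really is: Dijkstra's algorithm is essentially a priority queue of tentative distances. A minimal Python sketch (the adjacency-list format here is just an assumption for the example):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start in a graph given as
    {node: [(neighbor, edge_weight), ...]} with non-negative weights."""
    dist = {start: 0}
    pq = [(0, start)]  # (tentative distance, node)
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The correctness argument the earlier comment asked about falls out of the same structure: with non-negative weights, the node popped with the smallest tentative distance can never be improved later, so its distance is final.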
Something that helps, I think, is to make something practical that demands it.
I used to think the same (I've been programming for 30+ years) until I decided to build https://tablam.org, and making a "database" is the kind of project where all this stuff suddenly is practical and worthwhile.
"AI isn't great at creating software, but it is great at writing functions."
This 100%. In my experience (ChatGPT - paid account), it often causes more problems than it solves when I ask it to do anything complex, but for writing functions that I describe in simple English, much like your example here, it has been overall pretty amazing. Also, I love asking it to generate tests for the function it writes (or that I write!). That has also been a huge timesaver for me. I find testing to be so boring and yet it's obviously essential, so it's nice to offload (some of) that to an LLM!
I have never heard of “jq”. Oh my goodness. Your comment may have just changed my life. I cannot emphasize enough how many times I have needed a tool like this (and, yes, shame on me for not making a better effort to find one). Thank you!
Yep, the syntax and semantics are quite different from other languages, and it took me a long time and a deep understanding to really appreciate how expressive and damn well designed it is.
Wow, I’ve never thought about that, but you’re right! It really has trained me to be skeptical of what I’m being taught and confirm the veracity of it with multiple sources. A bit time-consuming, of course, but generally a good way to go about educating yourself!
I genuinely think that arguing with it has been almost a secret weapon for me with my grad school work. I'll ask it a question about temporal logic or something, and it'll say something that sounds accurate but, after checking the traditional documentation, turns out to be wrong or misleading. Then I can fight with it, see if it refines its answer into something correct, check that again, etc. I keep doing this for a bunch of iterations and I end up with a pretty good understanding of the topic.
I guess at some level this is almost what "prompt engineering" is (though I really hate that term), but I use it as a learning tool and I do think it's been really good at helping me cement concepts in my brain.
> I'll ask it a question about temporal logic or something, and it'll say something that sounds accurate but, after checking the traditional documentation, turns out to be wrong or misleading. Then I can fight with it, see if it refines its answer into something correct, check that again, etc. I keep doing this for a bunch of iterations and I end up with a pretty good understanding of the topic.
Interesting, that's the basic process I follow myself when learning without ChatGPT. Comparing my mental representation of the thing I'm learning to existing literature/results, finding the disconnects between the two, reworking my understanding, wash rinse repeat.
I guess a large part of it is just kind of the "rubber duck" thing. My thoughts can be pretty disorganized and hard to follow until I'm forced to articulate them. Finding out why ChatGPT is wrong is useful because it's a rubber duck that I can interrogate, not just talk to.
It can be hard for me to directly figure out when my mental model is wrong on something. I'm sure it happens all the time, but a lot of the time I will think I know something until I feel compelled to prove it to someone, and I'll often find out that I'm wrong.
That's actually happened a bunch of times with ChatGPT, where I think it's wrong until I actually interrogate it, look up a credible source, and realize that my understanding was incorrect.
Yeah, I love C++ as a hobbyist programmer but will readily admit I basically use it like C but with the added convenience of strings & classes. Hardly how you're supposed to use it these days, but this is fine for my toy projects. But I can't imagine using the language professionally where you've got to be aware of best practices, modern enhancements, etc. Seems overwhelming!
This is indeed a valid question, but I think it’s largely up to the user. For me, as a hobbyist programmer that sometimes writes code at work to automate certain tasks, I use ChatGPT to quickly create boilerplate/template type code AND to learn how to do new things. When I’m asking how to do something new, I try to actually learn what’s going on so that I won’t have to keep asking about that particular issue. But yeah, the temptation to just say “thanks, ChatGPT” and move on without learning anything is certainly there and could be quite harmful to one’s overall coding skills.
I think the interesting part of that is, is there money in it? I could see it being useful for hobbyists that don't care about programming that much, but would you pay for that?
I'm trying to remain impressed and open-minded, and there's no question these things are super cool, and something genuinely exciting. And who knows how much they'll improve over the next few years? But so far, if I'm brutally honest, they have largely been frustrating to work with. I am a teacher, and I use them (mainly via APIs) to generate questions for worksheets and study guides for my students. The idea of letting the computer help me save time on creating those kinds of exercises so that I can then focus on the more interesting/challenging activities initially sounded so good, but I've found I'm completely unable to trust these systems to generate correct information. Almost every question set or study guide I've generated had at least one, sometimes several, serious errors that would have led my students astray.
It does still save time, and I'm sure it will get better. But like you said, the fact that I have to constantly correct its output basically eliminates the possibility, at least for now, of using it as a tutor to teach me material that is genuinely new to me (something I was looking forward to doing). I simply can't trust it for that at this point. We'll see how things evolve.
I own one, but almost never use it. For quick, few-steps calculations I’ll use the iPhone’s calculator app. When I have more complex calculations, I almost always find myself needing to remember and label different values, so I’ll just type “python3” in the terminal and get to work ;)
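For anyone who hasn't tried it, the appeal is exactly that you can name intermediate values instead of juggling a calculator's memory keys. A hypothetical session (the numbers are invented for illustration):

```python
# Quick compound-interest check, the kind of thing where
# named values beat a calculator app's M+ key.
principal = 12_000
rate = 0.045   # annual rate
years = 7

final = principal * (1 + rate) ** years
gain = final - principal
print(f"final balance: {final:.2f}, total gain: {gain:.2f}")
```

Every value stays visible and re-editable, so reworking one assumption means changing one line rather than redoing the whole keystroke sequence.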