Author here - I agree that the best way to learn is to implement and fail along the way. My point was that I would never professionally opt to write a sorting algorithm instead of using the built-in sort() most languages come equipped with.
I did a post [0] about this last year, and vanilla LLMs didn’t do nearly as well as I’d expected on Advent of Code, though I’d be curious to try this again with Claude Code and Codex.
> LLMs, and especially coding focused models, have come a very long way in the past year.
I see people assert this all over the place, but I have personally decreased my usage of LLMs in the last year. Over the same period I’ve also increasingly developed a reputation as “the guy who can get things shipped” at my company.
I still use LLMs, and likely always will, but I no longer let them do the bulk of the work, and I’ve benefited from that.
Last April I asked Claude Sonnet 3.7 to solve AoC 2024 day 3 in x86-64 assembler, and it one-shotted solutions for parts 1 and 2(!)
It's true this was four months after AoC 2024 came out, so it may have been trained on the answers, but I think that window is too short for them to have made it into the training data.
Day 3 in 2024 isn't a Math Olympiad-tier problem or anything, but it seems novel enough, and my prior experience with LLMs was that they were absolutely atrocious at assembler.
Let me ask for the same, but with these constraints:
- runs on a laptop CPU
- decides if a long article is relevant to a specified topic, maybe even producing a summary of the article or picking out the interesting parts as specified in the prompt instructions
- no fine-tuning, please.
Though instead of being a single file, you and the LLM organize your context so it's easily searchable (folders and files). It’s all version controlled too, so you can easily update context as the project evolves.