Hacker News

I am sorry, but Uiua and LLM-generated code? This has to be a shitpost.


Welcome to the new normal. Love it or hate it, there are now a bunch of devs who use LLMs for basically everything. Some are producing good stuff, but I worry that many don't understand the subtleties of the code they're shipping.


For me, the convincing part was being able to write so much more documentation, specification, tests, and formal verification than I could before that the LLM basically has no choice but to build the right thing.

OpenAI's Codex model is also ridiculously capable compared to everything else, which helps a lot.


I’ve never tried to use it for formal verification. Does it work well for that? Is it smart enough to fix formal verification errors?

The place this approach falls down for me is refactoring. Sure, you can get ChatGPT to help you write a big program. But when I work like that, I don’t have the insights needed to simplify the program and make it more elegant. And if I missed some crucial feature that requires some of the code to be rewritten, ChatGPT isn’t anywhere near as helpful, especially since I don’t understand the code as well as I would have if I’d authored it from scratch myself.


I never said to use LLMs to generate Uiua. I said that Uiua is an edge case where tacitness is indeed elegance.

I wouldn't write anything but Rust via LLMs, because that's the only language where I feel the static type checking story is strong enough, in large part thanks to Kani.
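[Editor's note: for readers unfamiliar with Kani, the Rust model checker mentioned above, a proof harness looks roughly like this. The function `clamp_percent` is purely illustrative, not from any project discussed in the thread; the harness itself uses Kani's real `#[kani::proof]` / `kani::any` API and is checked with `cargo kani`, while the `#[cfg(kani)]` gate lets the file also build with plain `rustc`.]

```rust
// Illustrative function: clamp an arbitrary value into 0..=100.
fn clamp_percent(x: i64) -> i64 {
    x.max(0).min(100)
}

// Compiled only under `cargo kani`. Kani treats `kani::any()` as a
// symbolic value, so the assertion is checked for *every* i64, not
// just sampled inputs as a unit test would.
#[cfg(kani)]
#[kani::proof]
fn clamp_percent_stays_in_range() {
    let x: i64 = kani::any();        // symbolic input covering all of i64
    let p = clamp_percent(x);
    assert!((0..=100).contains(&p)); // must hold for every input
}

fn main() {
    // Ordinary spot checks when built with plain rustc.
    assert_eq!(clamp_percent(150), 100);
    assert_eq!(clamp_percent(-5), 0);
    assert_eq!(clamp_percent(42), 42);
}
```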



