
Python is just a beautiful, well-designed language. In an era where LLMs generate code, it is reassuring that they mostly generate beautiful code, and that Python has risen to the top. If you look at the graph, Julia and Lua also do incredibly well, despite being a minuscule fraction of the training data.

But Python/Julia/Lua are by no means the most natural languages. What is natural is what people write before the LLM: the stuff the LLM translates into Python. It is hard to get a good look at these "raw prompts," since the LLM companies keep those datasets closely guarded, but from HumanEval, MBPP+, YouTube videos of people vibe coding, and the like, it is clear that they are mostly English prose, with occasional formulas and code snippets thrown in, and that the text is not "ugly" but generally pre-processed through an LLM. So from my perspective the next step is to switch from Python as the source language to prompts as the source language: integrating LLMs into the compilation pipeline is a logical step. But currently they are too expensive to use consistently, so this is blocked by hardware development economics.
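A minimal sketch of what "prompts as source language" might look like: an English spec is treated as the source file, translated to Python, then compiled and executed. Here `translate_to_python` is a hypothetical stand-in for a real code-model API call, hardcoded so the sketch is self-contained.

```python
def translate_to_python(prompt):
    """Stand-in for an LLM call: in a real pipeline this would send
    the prompt to a code model and return the generated source."""
    # Hardcoded translation so the sketch runs without an API key.
    return "def double(x):\n    return 2 * x\n"

def compile_prompt(prompt):
    """Compile an English 'source file' into a module-like namespace."""
    source = translate_to_python(prompt)
    namespace = {}
    # Compile and execute the generated Python, collecting its definitions.
    exec(compile(source, "<prompt>", "exec"), namespace)
    return namespace

module = compile_prompt("Write a function double(x) that returns 2*x.")
print(module["double"](21))  # 42
```

The interesting (and unsolved) part is everything this stub elides: caching translations so the "compiler" is deterministic and affordable, and verifying that the generated code matches the spec.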



Mhhm, yes. There's a thread of discussion I chose not to delve into in the post, but there is something interesting in the observation that languages close to natural language (Python was famous for being almost executable pseudo-code for a while) are easier for LLMs to generate.

Maybe designing new languages to be close to pseudo-code would lead to better results when asking LLMs to generate them? But there's also a fear that prose-like syntax might not be the most appropriate for some problem domains.
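To make the "executable pseudo-code" point concrete: a small Python function can read almost line-for-line like the prose description of its algorithm. The function name and example below are illustrative, not from the post.

```python
# Prose version: "for each word in the text, count how many times it
# appears, then return the words sorted from most to least frequent."
def most_frequent_words(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

print(most_frequent_words("the cat and the hat"))
# ['the', 'cat', 'and', 'hat']
```

Each line maps directly onto a clause of the English description, which is plausibly why models trained mostly on English prose find this style easy to emit.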



