I like the idea of automatic code generation from papers, but I'm scared of it.
Suppose you get a paper, you automatically implement the code, then modify it a bit with a novel idea and publish your own paper. Then somebody else does the same with your paper, and so on... at some point we will have a huge quantity of vibe-coded code on GitHub, and two similar papers will have very different underlying implementations, making them hard to reason about and hard to change.
From a learning perspective, you try to understand the code, and it's all spaghetti, and you lose more time understanding the code than it would take to just reimplement it. You also learn a lot by not only reading the paper but also reading the authors' code, where most of the small details reside.
And I'm not even talking about the reliability of the code, or the tests needed to know it's a correct implementation. Authors try to keep papers as close as possible to the implementation, but subtle steps are sometimes left out, sometimes through inadvertence, sometimes because the page count is limited.
A paper and an implementation are not a one-to-one mapping.
Honestly, the code from my interns has greatly improved since they started using AI. And there is a lot of really ugly, hard-to-read code from papers. So I don't think code completely generated by AI will be an obvious loss of readability :)
Very interesting. Do you have a specific approach for teaching them how to use LLMs, or do they freewheel? Do you give advice? If so, what kind?
I would love to have a structured approach to help students learn to use LLMs better.
What I have observed (for uni students) is that they produce better code overall, but have no idea how it works and would not be able to reproduce it without LLMs. (This applies from first year through to final year.)
For the moment it's a bit freewheeling. And I agree the code is better, but they probably could not reproduce it themselves.
I honestly don't know how to "force" them to understand the code the LLM writes if the code is clean. It only comes up when the code produced by the LLM is overcomplicated or bad, and we catch that during code review.
I have the impression it will create even more disparity between students: those who just use LLM code and those who try to understand it.
> you try to understand the code, and it's all spaghetti, and you lose more time understanding the code than it would take to just reimplement it.
I agree with you in general, but maybe the jump would be similar to the one from hand-written punch cards/assembly to higher-level compilers. Very few people worry about the asm generated by GHC, for example. So maybe a lot of code will be like that. I also imagine that at some point a better intermediate language for LLMs to generate will be discovered, and suddenly that's how most programs will be written.
I would love that. I mostly work with ideas, and the code is an implementation detail for me, so yes, in some ways automated code generation would let me be far more productive. I'm not against it; I'm just worried about how effective an LLM's approach is (at the moment at least).
The example they give is 'implementing deep learning papers'. I find those the easiest papers to implement, compared with, say, some obscure algorithm that can't rely on frameworks such as PyTorch and where speed is critical.
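To illustrate why (a toy sketch, not taken from any particular paper): a typical "new block" in a deep learning paper often reduces to a few lines on top of framework primitives, which is exactly the kind of code these tools have seen countless variations of.

    # Hypothetical example: a gated residual block of the sort a paper might
    # propose, written directly on top of PyTorch primitives.
    import torch
    import torch.nn as nn

    class GatedResidualBlock(nn.Module):
        """y = x + gate(x) * MLP(x) -- an invented block, for illustration only."""
        def __init__(self, dim, hidden):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

        def forward(self, x):
            return x + self.gate(x) * self.mlp(x)

    x = torch.randn(8, 64)
    print(GatedResidualBlock(64, 256)(x).shape)  # torch.Size([8, 64])

An obscure algorithm with no framework support and hard performance constraints is a very different story.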
I can't find the essay, but I think it was Wolfram who wrote that we should let students use Mathematica and teach them to use it from a young age. The rationale: before, you had to use logarithm tables, and that took up a lot of time during your education. Then, with the advent of the calculator, students could compute logarithms instantly, so they could focus on more advanced ideas that build on them. With Mathematica they can execute matrix operations automatically, so they spend most of their time thinking with matrix operations instead of just learning how to manipulate a matrix by hand.
So with more powerful tools, you can expand your capabilities faster.
But the main difference I see here is that math is precise and well defined. Here you get a piece of software that is one sample from the space of possible programs that solve the problem (if you are lucky).
To reach the metaphorical punchcards->GHC point, you need an LLM tool that always gives the same answer, and hopefully the optimal one, and where small changes in the paper only move the software a little within the space of viable programs. Maybe we will get there, but that is not yet what this paper proposes.