Hacker News

Using poor-quality AI suggestions as a reason not to use Rust is a super weird argument. Something is very wrong with that idea. What's next, avoiding everything AI performs poorly at?

Flexible scripting is a fair point, but it's not an argument against Rust either. Rather, it's an argument for more separation between the scripting machinery and the core engine.

For example, Godot allows using Rust for game logic if you don't want to use GDScript, and doing so doesn't really mess up the design of their core engine. Allowing that flexibility is just more work, of course.

The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).



Yes, a lot of people are reasonably going to decide to work in environments that are more legible to LLMs. Why would that surprise you?

The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.


The first argument sounds like a major fallacy to me. It doesn't surprise me, but I find it extremely wrong.


Why?


Because it discourages learning based on the mediocrity of AI. I find that such an idea perpetuates mediocrity (not just in AI itself but in whatever it's used for).

It's like saying: I don't want to learn how to write a good story because AI always suggests a bad one anyway. Maybe that delivers the idea better.


It's not at all clear to me what this has to do with the practical delivery of software. In languages that LLMs handle well, with a careful user (i.e., not a vibe coder, but someone reading every line of output and subjecting most of it to multiple cycles of prompting), the code you end up with is basically indistinguishable from the replacement-level code of an expert in the language. It won't hit that human expert's peaks, but it won't generally sink below their median. That's a huge accelerator for actually delivering projects, because, for most projects, most of the code need only be replacement-grade.

Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing? Like, the same reason I'd use only hand tools when doing joinery in Japanese-style woodworking? There's a place for that! But most woodworkers... use table saws and routers.


It's not about the delivery of software; it's about the avoidance of learning based on the mediocrity of AI. I.e., the original post literally cites LLMs being poor at Rust suggestions as a reason to avoid the language.

That implies that proponents of such an approach don't want to pursue learning that requires them to exceed the mediocrity level set by the AI they rely on.

For me it's obvious that this has a major negative impact on many things.


Your premise here being that any software not written in Rust must be mediocre? Wouldn't it be more productive to just figure out how to evolve LLM tooling to work well with Rust? Most people do not write Rust, so this is not a very compelling argument.


Rust is just an example in this case, not essential to the point. If someone evolves LLMs to work better with Rust, they will still be mediocre at something else, and using that as an excuse for avoidance is problematic in itself; that's what I'm saying.

Basically, learn Rust based on whether it helps solve your problems better, not on whether some LLM happens to be useless at it.


> Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing?

The strongest reason I can think of to discard this kind of automation, and do so proudly, is that it's effectively plagiarizing from all of the experts whose code was used in the training data set without their permission.


No plausible advance in nanotechnology could produce a violin small enough to capture how badly I feel about our profession being "plagiarized," after decades of rationalizing movie piracy by appealing to the cultural importance of Star Wars.

Artists can come at me with this concern all they want, and I feel bad for them. No software developer can.

I disagree with you about the "plagiaristic" aspect of LLM code generation. But I also don't think our field has a moral leg to stand on here, even if I didn't disagree with you.


I'm not making an argument from grievance about my own code being plagiarized. I actually don't care if my own code is used without even the attribution required by the permissive licenses it's released under; I just want it to be used. I do also write proprietary code, but that's not in the training datasets, as far as I know. But the training datasets do include code under a variety of open-source licenses, both permissive and copyleft, and some of those developers do care how their code is used. We should respect that.

As for our tendency to disrespect the copyrights of art, clearly we've always been in the wrong about this, and we should respect the rights of artists. The fact that we've been in the wrong about this doesn't mean we should redouble the offense by also plagiarizing from other programmers.

And there is evidence that LLMs do plagiarize when generating code. I'll just list the most relevant citations from Baldur Bjarnason's book _The Intelligence Illusion_ (https://illusion.baldurbjarnason.com/), without quoting from that copyrighted work.

https://arxiv.org/abs/2202.07646

https://dl.acm.org/doi/10.1145/3447548.3467198

https://papers.nips.cc/paper/2020/hash/1e14bfe2714193e7af5ab...


I don't mean to attribute to you the overwhelmingly common sentiment about intellectual property claims for things other than code, and I'm sorry that I communicated that (you didn't call me on it, but you'd have had every right to).

I stand by that argument, but acknowledge it isn't relevant here.

My bigger thing is just, having the experience of writing many thousands of lines of backend code with an LLM (just yesterday), none of what I'm looking at can meaningfully be described as "plagiarized". It's specific to my problem domain (indeed, to my extremely custom stack) and what isn't domain-specific is just extremely generic stuff (opening a boltdb, printing a table with lipgloss), just assembled precisely.


It could be a weird argument, but as a Rust newcomer, I have to say it's really something that jumps out at you. LLMs are practically useless for anything non-basic, and Rust contains a lot of non-basic things.


So, what are the chances that the pendulum swings to lower-level programming via LLM-generated C/C++ if LLM-generated Rust doesn't emerge? Note that this question is a context switch from gaming to something larger. For gaming, it could easily be that the engine and culture around it (frequent regressions, etc) are the bigger problems than the language.


I haven't coded in C/C++ in years, but friends who do and who work on non-trivial codebases in those languages had a really crappy experience with LLMs too.

A friend of mine only understood why I was so impressed by LLMs once he had to start coding a website for his new project.

My feeling is that low-level / systems programming is currently at the edge of what LLMs can do. So I'd say that languages that manage to provide nice abstractions around those types of problems will thrive. The others will have a hard time gaining support among young developers.


Developers often pick languages and libraries based on the strength of their developer tools. Having great dev tools was a major reason Ruby on Rails took off, for example.

Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.


It's a weird idea now, but it won't be weird soon. As devs and organizations further buy into AI-first coding, anything not well-served by AI will be treated as second-class. Another thread here brought up the risk that AI will limit innovation by not being well-trained on new things.


I agree that such a trend exists, but it's extremely unhealthy, and developers of all people should have a better sense of how bad it is.



