It's possible to dislike Rust but pragmatically use it. Personally, I do not like Rust, but it is the best available choice for some work and personal stuff.
Personally I think most programming languages have really ... huge problems. And the languages that are more fun to use, Ruby or Python, are slow. I wonder if we could have a great, effective, elegant language that is also fast. All attempts seem to end up with, e.g., a C++-like language.
Honestly I find writing Rust more fun than writing Python. Python just doesn't scale: any non-trivial quantity of it has a habit of turning into spaghetti, however hard I try to be disciplined.
Rust, although annoying at a micro scale, does at least enforce some structure on your code; though, like Kling, I miss OO.
AI has made Rust approachable to a new audience of programmers who didn't want to dedicate their lives to learning the ins and outs of the language, especially C++ developers, who already learned the ins and outs of one hyper-complex programming language and don't want to go through that a second time.
Before AI, writing Rust was a frustrating experience that involved spending 90% of your time reading documentation and grumbling, "I could do this in 5 minutes in C++".
Now I can write Rust in a way that makes sense to my C++-addled brain and let the AI do the important job of turning it into an idiomatic Rust program that compiles.
It's stuck with LLVM for the time being, so I can't currently LTO with GCC objects. It's also got a lot more complexity than I prefer in a language. Beyond that:
- A lot of features I find important seem perma-unstable.
- Pin is unnecessarily confusing.
- No easy way to define multiple compilation units for use with linker object selection and the constructor attribute.
- The easy path is downloading binary toolchains with rustup rather than using your distro package manager, and you can't use unstable features without the bootstrap env var on distro Rust toolchains.
- Cargo leads to dependency bloat.
- The std/core crates are prebuilt binaries and bloat binary sizes.
- Bindgen doesn't translate static inline code, and you can't go-to-definition from a bindgen binding to the original header file.
- The language exposes a ton of stuff only to std and not to user code.
- Unsafe code is unergonomic.
- No easy way to model a cleanup function that needs more args.
- No support for returns_twice, and no ability to use newer stuff like preserve_none.
- Macros pollute the global namespace.
- Can't account for platforms where size_t and uintptr_t are different.
- Traits can only be relied on if marked unsafe.
- Can't implement something like defer, since it holds a borrow.
- no_std code can still pull in core::fmt, and there's no way to enforce that dependencies are also no_std.
- Panics are considered safe.
- No way to add non-function fields to dyn vtables.
- No way to declare code separately from its definition.
- No way to have duplicate type definitions that merge, making interop between different bindgen-generated modules annoying.
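For what it's worth, the defer point can be made concrete with a minimal scopeguard-style sketch (my own illustration, not from any crate): the guard itself works, but its closure borrows whatever it captures for the whole scope.

```rust
use std::cell::Cell;

// Minimal "defer": run a closure when the guard is dropped at scope exit.
struct Defer<F: FnMut()>(F);

impl<F: FnMut()> Drop for Defer<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn cleanup_ran() -> bool {
    let cleaned = Cell::new(false);
    {
        let _guard = Defer(|| cleaned.set(true));
        // `_guard` holds a borrow of `cleaned` for this whole scope; a
        // closure capturing something mutably would lock that value
        // until scope end, which is the limitation described above.
        assert!(!cleaned.get());
    }
    cleaned.get() // the guard has run by now
}

fn main() {
    assert!(cleanup_ran());
    println!("defer ran at scope exit");
}
```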
that said, this being 2023 and them being properly shamed might be why their site now says it's all local and TÜV certified (which i doubt means much, but still)? like outrage actually forced them to get their act together?
I think you are entirely missing the author's point. The author is generalizing from the specific technicalities of C/Rust/etc. UB to the problem with UB, which is that should it be triggered, you can't know what the program will do. This does not have to be the result of language specification. If you write safe Rust yourself, then yes, usually no UB will occur, and you can know what will happen based on the code you wrote. The author extends UB to vibecoding, where there is no specification governing the translation of prompts to code. Without thorough review, you are unable to be sure that the output code matches the intent of your prompting, which is analogous to writing code with UB. The issue the author has with vibecoded Rust is not that the code can trigger undefined behavior at the language layer, but that the perfectly "safe" generated code may not at all match the intended semantics.
The problem with the author's argument is the inductions don't follow from the premise. With defined C, you can in principle look at a piece of code and know what it will do in the abstract machine (or at least build a model dependent on assumptions about things like unspecified behavior). Actually doing this may be practically impossible, but that's not the point. It's not possible in the presence of UB. You can't know what a piece of code containing UB will do, even in principle.
You can in principle read the LLM's output and know that it won't put your credentials on the net, so it's not the same as UB. Maybe there are practical similarities to UB in how LLM bugs present, but I'm not sure it's a useful comparison and it's not the argument the author made.
The practical impossibility is a real issue: see the recent post about booleans in Doom, where the author knew exactly what the problem was and where it was, but after reading through the standards still couldn't really find where it was forbidden, eventually saying "it's probably this bit, because I can't find anything else that fits".
And when the author of the current post says:
Turn on all linting, all warnings,
this doesn't help. I've seen code compiled with -Wall -Wextra -Wtf that produces zero warnings but for which gcc happily outputs code that segfaults, crashes, or otherwise breaks catastrophically when run. So the compiler is saying "I've found UB here, I'm not going to say anything despite maximum warnings being turned on, I'm just going to output code that I know will fail when run".
> The problem with the author's argument is the inductions don't follow from the premise.
That's possible. No one ever accused me of sound arguments :-)
I would still like to address your comment anyway.
Let's call this assertion #1:
> With defined C, you can in principle look at a piece of code and know what it will do in the abstract machine ... It's not possible in the presence of UB.
And let's call this assertion #2:
> You can in principle read the LLM's output and know that it won't put your credentials on the net, so it's not the same as UB.
With Assertion #1 you state you are not examining the output of the compiler, you are only examining the input (i.e. the source code).
With Assertion #2, you state you are examining the output of the LLM, and you are not examining the input.
IOW, these two actions are not comparable: in one you examine only the input, while in the other you examine only the output.
For the case of accidentally doing $FOO when trying to do $BAR:
1. No amount of input-analysis on LLM prompts will ever reveal to you if it generated code that will do $FOO - you have to analyse the output. There is a zero percent chance that examining the prompt "Do $BAR" will reveal to the examiner that their credentials will be leaked by the generated code.
2. There are many automated input-analysis tools for C that will catch a large amount of UB, preventing $FOO, when the code implements "Do $BAR". Additionally, while a lot of UB gets through, a great deal is actually caught during review.
Think of the case: "I wrote code to add two numbers, but UB caused files to get deleted off my computer"
In C, this was always possible (and C programmers acted accordingly). In Java, C#, Rust, etc this was never possible. Unless your code was generated by an LLM.
That's a good point, I didn't realize I was implicitly mixing up inputs and outputs.
I think you're imagining a very particular way of using LLMs though. The source code is the source of truth in traditional development. It's the artifact we preserve long term and the one that's used to regenerate ephemeral artifacts like binaries. When you regenerate binaries from source code containing UB, the result may not behave the same as before. Each binary's semantics can be individually understood, but not the semantics of future translations.
If you treat the entire LLM->binary system as a black box, then yeah. I agree there's no reasonable way to go from input to output semantics, much as there isn't if you ask a human. But people generally aren't using the prompt as the source of truth. They're using the code that's produced, which (in the absence of traditional UB) will have the same semantics every time it's used even if the initial LLM doesn't.
If that's the author's point then the article needs a rewrite. I suspect that was _not_ the author's point and it's offered as a good faith but misplaced post-hoc justification.
>> Without thorough review, you are unable to be sure that the output code matches the intent of your prompting, which is analogous to writing code with UB.
> If that's the author's point then the article needs a rewrite. I suspect that was _not_ the author's point and it's offered as a good faith but misplaced post-hoc justification.
I am the author (thanks for giving some of your valuable attention to my post; much appreciated :-), and I can confirm that the `>> ...` quoted bit above is my point. This bit of my blog post is where I made that specific point:
> As of today 2, there is a large and persistent drive to not just incorporate LLM assistance into coding, but to (in the words of the pro-LLM-coding group) “Move to a higher level of abstraction”.
> What this means is that the AI writes the code for you, you “review” (or not, as stated by Microsoft, Anthropic, etc), and then push to prod.
> Brilliant! Now EVERY language can exhibit UB.
Okay, fair enough, I'm not the world's best writer, but I thought that bit was pretty clear when I wrote it. I still think it's clear, especially the "Now EVERY language can exhibit UB" bit.
I'm now half inclined to paste the entire blog into a ChatAI somewhere and see what it thinks my conclusion is...
It's going to be optional - the hooks will always fix the code if they can, but then you can supply a `--no-fix` flag (or config) if you want to tell it to not actually apply those changes to the real filesystem.
It doesn't need Landlock because WASI already provides a VFS.
It depends. I wrote a pre-commit hook (in shell, not pre-commit the tool) at a previous job that ran terraform fmt on any staged files (and added the changes to the commit) because I was really tired of having people push commits that would then fail for trivial things. It was overrideable with an env var.
IMO if there’s a formatting issue, and the tool knows how it should look, it should fix it for you.
The standard way for this with current tools is to have the formatter/linter make the changes but exit with a non-zero status, failing the hook. Then the person reviews the changes, stages, and commits. (That's what our setup currently has `tofu fmt` do.)
But if you don't want to have hooks modify code, in a case like this you can also just use `tofu validate`. Our setup does `tflint` and `tofu validate` for this purpose, neither of which modifies the code.
This is also, of course, a reasonable place to have people use `tofu plan`. If you want bad code to fail as quickly as possible, you can do:
tflint -> tfsec -> tofu validate -> tofu plan
That'll catch everything Terraform will let you catch before deploy time (most of it very quickly) without modifying any code.
> make the changes but exit with a non-zero status
That's reasonable. My personal take (and that of my team at the time) was that I was willing to let formatting - and only formatting - be auto-merged into the commit, since that isn't going to impact logic. For anything else, though, I would definitely want to let the submitter review the changes.
The amount of paranoia I need for unsafe Rust is orders of magnitude higher than for C. Keeping track of the many things that can implicitly drop values and/or free memory, and figuring out whether I'm handling raw pointers and reference conversions in a way that doesn't accidentally alias, is painful. The C rules are fewer and simpler; they are also well known, and are alleviated and documented by guidelines like MISRA. Unsafe Rust has more rules, which seem underspecified, underdocumented, and unstable. Known unknowns are preferable to unknown unknowns.
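To illustrate the kind of care this means in practice (my own sketch, not from any official guideline): once you take a raw pointer into a buffer, the conservative pattern is to do all access through that pointer, since materializing an overlapping `&mut` while the pointer is still in use is exactly the aliasing mistake that's easy to hit by accident.

```rust
// Bump the first and last element of a non-empty slice through a raw
// pointer. All access goes through `p`; mixing in fresh safe references
// to `buf` while `p` is live would risk the aliasing UB described above.
fn bump_ends(buf: &mut [u32]) {
    assert!(!buf.is_empty());
    let last = buf.len() - 1; // read the length before taking the pointer
    let p = buf.as_mut_ptr();
    unsafe {
        *p.add(0) += 10;
        *p.add(last) += 10;
    }
}

fn main() {
    let mut buf = [1u32, 2, 3, 4];
    bump_ends(&mut buf);
    assert_eq!(buf, [11, 2, 3, 14]);
    println!("{:?}", buf);
}
```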
I've been thinking of writing a language with Rust's ergonomics but less of the memory-safety stuff. I prefer using no dynamic allocations, in which case the only memory-safety feature I need is preventing references to locals from leaking into outer scopes. As for the thread-safety stuff, most of my stuff is single-threaded.
The code I have in C is often code that doesn't fit in Rust's safety model. Dealing with FFI is annoying because slices have no defined layout. `dyn` is limited compared to what I can do with a manual vtable. I have seriously attempted porting my personal stuff to Rust, but there are enough papercuts that I go back to C. I want the parts of Rust I find helpful without the parts I don't.
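To make the manual-vtable point concrete, here's roughly the C pattern transliterated to Rust (names are mine, purely illustrative): a plain struct of function pointers that can also carry data fields, which a generated `dyn` vtable won't let you add.

```rust
// Hand-rolled vtable: function pointers plus ordinary data fields.
struct AnimalVtbl {
    name: &'static str,           // a non-function field in the vtable
    speak: fn(&Animal) -> String, // "method" as a plain function pointer
}

struct Animal {
    vt: &'static AnimalVtbl,
    volume: u32,
}

fn dog_speak(a: &Animal) -> String {
    format!("woof x{}", a.volume)
}

static DOG_VT: AnimalVtbl = AnimalVtbl { name: "dog", speak: dog_speak };

fn main() {
    let d = Animal { vt: &DOG_VT, volume: 2 };
    assert_eq!(d.vt.name, "dog");
    assert_eq!((d.vt.speak)(&d), "woof x2");
    println!("{} says {}", d.vt.name, (d.vt.speak)(&d));
}
```

Because the layout is yours, you can add fields, share vtables between types, or build them at runtime, none of which `dyn` allows.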
To add to the sibling comments: a world-class LSP enabling you to get a great experience in any editor/IDE out of the box. This is not at all exclusive to Rust, of course; most strongly typed languages have one at this point. But I've been working in Python lately, and this is what I miss the most. (I'm using an LSP in Python too, but it isn't as good at the best of times, and it seems like no matter how many times I fix its configuration, it's broken again the next day.)
I love how everything is an expression with a value. And match expressions are quite nice, especially with the Option type. I really miss those when working in JavaScript.
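A small illustration of both points, everything-is-an-expression and `match` over `Option` (my own toy example):

```rust
fn describe(n: Option<i32>) -> String {
    // `match` is an expression, so its value is the function's return value.
    match n {
        Some(x) if x < 0 => format!("negative ({x})"),
        Some(0) => "zero".to_string(),
        Some(x) => format!("positive ({x})"),
        None => "missing".to_string(),
    }
}

fn main() {
    // `if` is an expression too: it evaluates directly to a value.
    let label = if describe(None) == "missing" { "ok" } else { "bug" };
    assert_eq!(label, "ok");
    assert_eq!(describe(Some(-3)), "negative (-3)");
    assert_eq!(describe(Some(7)), "positive (7)");
    println!("{}", describe(Some(0)));
}
```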
I vaguely remember reading when this occurred. It was very recent no? Last few years for sure.
> The Linux kernel began transitioning to EEVDF in version 6.6 (as a new option in 2024), moving away from the earlier Completely Fair Scheduler (CFS) in favor of a version of EEVDF proposed by Peter Zijlstra in 2023 [2-4]. More information regarding CFS can be found in CFS Scheduler.
Ultimately, CPU schedulers are about choosing which attributes to weigh more heavily. See this[0] diagram from GitHub. EEVDF isn't a straight upgrade over CFS, nor is LAVD over either.
It's just that, traditionally, Linux schedulers have been rather esoteric to tune, and by default they've been optimized for throughput and fairness over everything else. Good for workstations and servers, bad for everyone else.
The kernel's policy for CVEs is to assign one to any patch that is backported, no? So this is just the first Rust patch, since Rust became non-experimental, to be backported?
I haven't looked at fs-verity at all, but per-file merkle trees should definitely be built into the filesystem as a standard thing. Rsync and file transfer programs want it, and having to compute that every time is crap - if it's built into the filesystem it can easily be computed lazily and invalidated if need be.
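A toy sketch of the per-file Merkle idea (my own illustration; std's non-cryptographic `DefaultHasher` stands in for the SHA-256 a real implementation like fs-verity uses): hash fixed-size blocks, then fold pairs of hashes level by level, so a change to one block only dirties one leaf-to-root path.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash one block of bytes (placeholder for a real cryptographic hash).
fn h(data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

// Combine a pair of child hashes into a parent hash.
fn h2(pair: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    for x in pair {
        x.hash(&mut s);
    }
    s.finish()
}

// Merkle root over fixed-size blocks of `file`.
fn merkle_root(file: &[u8], block: usize) -> u64 {
    let mut level: Vec<u64> = file.chunks(block).map(h).collect();
    if level.is_empty() {
        level.push(h(&[])); // root of an empty file
    }
    while level.len() > 1 {
        // Combine adjacent hashes; an odd one out is carried up alone.
        level = level.chunks(2).map(h2).collect();
    }
    level[0]
}

fn main() {
    let a = merkle_root(b"hello world, hello world", 8);
    let b = merkle_root(b"hello world, hello worlx", 8);
    assert_ne!(a, b); // a one-block change changes the root
    assert_eq!(a, merkle_root(b"hello world, hello world", 8));
    println!("root = {:016x}", a);
}
```

An rsync-style tool could then compare subtree hashes top-down to locate changed blocks without rereading the whole file.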
(obligatory plug for people to jump in and get involved with writing code if they want to see more of this stuff happen)
Seems like you could set up a cert for a honeypot domain to collect the IPs of bots running off of the certificate transparency logs. If the domain isn't linked from anywhere, then it's pretty sure to be a bot, isn't it?