baranul's comments | Hacker News

A major part of this problem appears to be that there are no identifiable humans in the loop to bring complaints to. Many of Google's responses come from automated, black-box algorithms.

When a Google response to a problem is outright bonkers, there is often not much that can be done but to keep banging your head against the wall (hoping something different happens), unless you are among the lucky few who have a human contact at Google. From what I've read and heard, those with human contacts have often been identified as needing special attention: they are people whose businesses make significant money for Google, or who could create problems in court.


Companies want the money and the continual engagement. People getting addicted to AI as a trusted advisor or friend is money in their pockets. Just like having people addicted to gambling or alcohol, it's all big business.

It's becoming even more apparent that there is a line between using AI as a tool to accomplish a task and relying on it excessively for psychological reasons.


No project or programmer should feel they have to justify their choice not to use Rust (or Zig), which seem to be strangely and disproportionately pushed on Hacker News and certain other social media platforms. The same goes for the pressure, though a bit less in recent years, to use OOP.

If they are getting good results with C and without OOP, and people like the product, then those outside the project shouldn't really have any say in it. It's their project.


> strangely and disproportionately pushed on Hacker News

There is literally nothing strange or disproportionate about it. It's incredibly obvious that new languages, designed by people who found older languages lacking, are of interest to people who are interested in new applications of technology and who want to use their new languages.

> then those from outside the project shouldn't really have any say on it. It's their project.

People outside the project are allowed to say whatever the hell they want, the project doesn't have to listen.


I think it's more than the normal amount of advocacy for a new language. Rust isn't the only "newer" language, yet I don't see this kind of high-strung pushing of, say, Kotlin or Scala or Go from their fans.

I think this is because of the gap in its target market -- Rust is firmly positioned to replace C and C++, which have a long history of safety issues. Kotlin is positioned to replace Java, and besides a few quality-of-life improvements, it changes some syntax but very few semantics, so the gap is much smaller. Go was originally pitched as a C or C++ replacement, and it's very nice for deeply parallel programs like web services, but its big runtime and its GC rule it out for a lot of (especially very low level or latency-critical) uses -- you don't see Go ported to microcontrollers for instance. I can't speak for Scala, because I don't have experience writing software in it.

To summarize, Rust provides a lot of compile-time discipline that C and C++ are lacking, and many people are tired of maintaining code that was developed without that discipline. Rust makes it harder to write low-effort software.

As a programmer who picks up a new language every 2-3 years (and one privileged to have an employer that tolerates this), I find Rust is really a breath of fresh air; not because it's easier, but because it codifies and enforces some good habits in ways that other languages don't.


There is no chance that Kotlin will replace Java. Java is the platform and Kotlin does change semantics. They’ve developed their own features that don’t align with the platform. Suspend functions vs virtual threads, data classes vs records, Kotlin value classes vs Java value classes. The gap is widening.

I'm in the process of migrating Kotlin code back to Java in our product. My experiment with Kotlin is over and I'm sticking 100% with Java. I like writing Kotlin, but I dislike reading Kotlin code.

Strange how people never say this about Swift and Objective-C

Probably because almost everyone avoids those languages at all costs

I don't agree.

First of all, Java isn't a platform. Kotlin and Java are both just languages, and Kotlin has explicit interoperability with Java exactly to make it easy for Java devs to "upgrade".

The JVM is a common target for both Java and Kotlin, where the two are intentionally interoperable - from the Kotlin-side, by virtue of explicit annotation. Both languages have other targets through other compilers, e.g., Kotlin's native backend and GraalVM.

The widening gap is not at all moving Kotlin further away from Java developers, but is just increasing the reasons to migrate. It is crucially not making interoperability with existing, legacy Java harder, just giving you more toys. Stuff like suspend functions vs. virtual threads only affects decision making in new application code, and you can for all intents and purposes use either depending on whether you're writing new Kotlin libs for a legacy Java app or a Kotlin app using legacy Java libs.

The C → Rust migrations that happen a lot these days underline how differences in features aren't a problem at all (quite the opposite, when there are new features), but that interoperability allowing partial work is by far the most important thing.

Plus, considering that Android apps were responsible for a very significant portion of actively developed Java (I would assume quite a lot), and with Android having gone full Kotlin, a quite significant portion of Java developers will either already have migrated or will soon be migrating to follow suit. This will over time affect a quite significant portion of the available skill pool for hiring, which will add additional pressure on enterprise.

There will always be Java, but I'd expect a significant portion of actively developed applications (as opposed to maintenance-mode only applications) to slowly migrate to either Kotlin or something else entirely.


I think you're wildly mistaken if you don't think Java is a platform. The VM and the language are intertwined. The VM explicitly knows about things like records, language constructs, and idioms from Java. Java sets the direction of the VM, not Kotlin.

The JVM does not know about Java; it knows about an IR that Java, Kotlin, Scala, Clojure, Groovy, etc. all target. Java as a language also doesn't know about the JVM, as it commonly targets things that are not the JVM, whether that is Dalvik, Graal or something else entirely.

That the JVM and its IR have features to help the Java compiler generate better output is obvious but not really relevant. Modern CPUs also have instructions to help C compilers generate better code, but that doesn't make them C platforms. Those are just implementation details.

So no, Java is not a platform. It is a language that sometimes runs on the JVM together with many other large and quite influential languages.


> First of all, Java isn't a platform.

You are being facetious. I mean, do you actually believe that the JVM exists in a context where Java does not exist? What does the J in JVM stand for?

> The JVM is a common target for both Java and Kotlin, where the two are intentionally interoperable (...)

Yes, in the sense that Java exists and Kotlin by design piggybacks on the JVM.

> The C → Rust migrations that happen a lot these days underline how differences in features isn't at all a problem (quite the opposite when there's new features), but that interoperability allowing partial work is by far the most important thing.

This analysis is very superficial and fails to identify any of the real-world arguments to switch from C. For example, Microsoft outright strangled C by spending many years refusing to support any standard beyond C89, in spite of being directly involved in its drafts. This was a major factor in C failing to address its pain points and DX shortcomings. Compare the evolution of C and C++ during that time period: we see C++ going down the same path in the C++0x days, only to recover spectacularly once C++11 got unstuck.


> You are being facetious. I mean, do you actually believe that the JVM exists in a context where Java does not exist? What does the J in JVM stand for?

Java runs on several things that are not the JVM. Android does not use the JVM to run Java, and even Oracle is pushing something that is not the JVM.

At the same time, JVM runs many things that are not Java.

If you are implying something along the lines of the JVM only having been authored because of Java, then that is nothing but a historical fact of little relevance from the early days of the language. If not even Oracle considers Java and the JVM one thing - and by virtue of Graal they don't - then it simply isn't so.

> This analysis is very superficial and fails to identify any of the real world arguments to switch from C

You misread - what you quoted was not an analysis of why the migrations happen. It was a parallel, underlining that migrations do happen in spite of obvious feature differences (and sometimes, because of such differences).


Yeah, Kotlin is stuck in an uncomfortable position, like F# is in the .NET world. It has pioneered several important features, but now the big brother has implemented them slightly differently and people demand interop from you.

At least Kotlin can theoretically retreat to Android.


I did a decent amount of AoC this year in F#. I felt it was more verbose than I would have expected. There were a lot of things that helped brevity; I really liked type definitions, except when I was using OO features, where it was extremely verbose to define members. I also really didn't like having to do Seq.map or List.filter for everything instead of just calling methods off of the seq or list.

From my POV having worked at a giant 50,000 person tech org—primarily Java—Kotlin was the kick in the pants that ushered in a lot of changes in the post-Java 11 world. We were on the verge of migrating wholesale to Kotlin after our CTO wrote an internal whitepaper in favor of the direction.

Out of interest

> As a programmer that picks up a new language every 2-3 years (and one that is privileged to have an employer that tolerates this)

does this mean they allow you to tinker around on your own, or do you actually then start to deploy things written in the new language to production?

After having quite a long career as a programmer, I realised that if I were ever CTO at a startup, unless there was an absolute proven need to switch languages, I'd mandate only a single language for our (back end) stack. The cost of supporting different languages is just too high.


I do deploy things in different languages -- We are a small team of open-minded programmers, and we are on a constant search for better tools and methods. I work for a robotics company (and have for many years), and having the flexibility to use Pion WebRTC (in Go) or PCL (in C++) or PyTorch (in Python) outweighs the cost of having software written in multiple languages.

> I do deploy things in different languages -- We are a small team of open-minded programmers, and we are on a constant search for better tools and methods.

This claim does not pass the smell test. Tech sprawl is a widely recognized problem, and dumping codebases every 2-3 years is outright unthinkable and pure madness. It doesn't even come across as resume-driven development, because 3 years is not nearly enough to get anyone to a proficient level.

This claim is so outlandish that I'm inclined to dismiss it entirely as completely made-up. There is no project manager in the world who would even entertain this thought.


Your world seems small -- I never said anything about dumping codebases. If I have a library implemented in Rust, which gets called by a program written in C++ through FFI, and contacts a service which is implemented in Python, I don't need to dump or waste any old work. A carpenter that only uses a hammer isn't a very good carpenter; different tools have different strengths and weaknesses.

> you don't see Go ported to microcontrollers for instance.
AVRGo disagrees: https://github.com/avrgo-org/avrgo

No commit for 7 years but https://tinygo.org/docs/reference/microcontrollers/ is up to date.

> I think this is because of the gap in its target market

Surely that gap has been filled for at least a decade, even if only by Rust itself?

Moreover, I am not sure that serves as an explanation as it shows up in the strangest places. As you mention Go: Visit any discussion about Go and you'll find some arbitrary comment about Rust, even though, as you point out, they don't even exist in the same niche; being different tools for different jobs.

> Go was originally pitched as a C or C++ replacement

It was originally imagined that it would replace C++ for network servers at Google. The servers part was made abundantly clear. In fact, the team behind it expressed quite a lot of surprise that it eventually found a home doing other things as well.

> you don't see Go ported to microcontrollers for instance.

You don't? https://tinygo.org


> Surely that gap has been filled for at least a decade, even if only by Rust itself?

I think this is the argument made by the "Rust Evangelism Task Force" -- that Rust provides the features that C and C++ are missing. What I meant by "gap" is "the distance between C or C++ and Rust is greater than the distance between C++ and Go (in Go's target use case) or between Java and Kotlin". For the record, I do think all of these languages are great; I'm just trying to reason out the "rewrite it in Rust" mantra that has taken hold in many online communities, including this one.

> You don't? https://tinygo.org

I wasn't aware of this, thank you.


> the distance between C or C++ and Rust is greater than the distance between C++ and Go (in Go's target use case) or between Java and Kotlin

What, exactly, does distance mean here?

The other explicitly stated design consideration for Go was for it to "feel like a dynamically-typed language with statically-typed performance". In other words, the assumption was that Googlers were using C++ for network servers not because of C++, but because something like Python (and Google was the employer of van Rossum at the time!) was too slow. Go was created to offer something more like Python but with performance more like C++. It was a "C++ replacement" only in the sense that C++ is what Google was using where it was considered the "wrong tool for the job". Keep in mind that Go was created before we knew how to make actually dynamically-typed languages fast.

Putting things into perspective, the distance between C++ and Go is approximately the same as the distance between C++ and Python. Which is a pretty big distance, I'd say. C, C++, and Rust are much closer. They are all trying to do essentially the same thing, with Rust only standing out from the other two thanks to its at-the-time unique memory model. So it is apparent that we still don't understand "gap" to mean the same thing.


How I interpret his comment about the distance: The benefit of switching from C/C++ to Rust is higher than switching from C++ to Go (in the similar use-cases) or from Java to Kotlin.

Another argument offered for Rust is that it's high-level enough that you can also use it for the web (see how many web frameworks it has). So I think that Rust's proponents see it as this universal language that could be good for everything.


> The benefit of switching from C/C++ to Rust is higher than switching from C++ to Go

Ten years ago the memory model was a compelling benefit, sure, but nowadays we have Fil-C, that C++ static analyzer posted here yesterday, etc. There is some remaining marginal benefit that C and C++ still haven't quite caught up with yet, but is that significantly smaller and continually shrinking gap sufficient to explain things as they stand today?

You are right that the aforementioned assumption did not play out in the end. It turns out that C++ developers did, in fact, choose C++ because of C++ and would have never selected Python even if Python was the fastest language out there. Although, funnily enough, a "faster Python" ended up being appealing to Python developers so Go does ultimately have the same story, except around Python (and Ruby) instead of C++.

> Another argument offered for Rust is that it's high-level enough that you can also use it for the web

It was able to do that ten years ago just as well. That doesn't really explain things either.


> Keep in mind that Go was created before we knew how to make actually dynamically-typed languages fast.

Would you mind elaborating on this? The Strongtalk heritage of VMs has been around for a while now, and certainly since before Go was started.


>To summarize, Rust provides a lot of compile-time discipline that C and C++ are lacking, and many people are tired of maintaining code that was developed without that discipline. Rust makes it harder to write low-effort software.

This doesn't explain why so many Rust activists are going to projects they have no involvement in and demanding they be rewritten in Rust.

What's happening is that there are progressive minded people who have progressive minded tactics, where they have a cause and everywhere they go they push their cause regardless of whether the places they are going have anything to do with their cause.


Is it really activism though, i.e. a concerted effort to put pressure on project leaders and make actual "demands"? Or is it just the occasional young Rust enthusiast asking questions or making the case for Rust?

You haven't been getting the checks? Bring that up at our next secret cabal meeting.

They're borrow checks

Probably something in-between: a self-organizing cult with too much support from industry.

Kotlin won't replace Java. They do not have the same niche.

Kotlin's niche was to replace LEGACY Java with something that builds for older versions of Java but also gives you features and ergonomics not available in those versions. There's a ton of super legacy Java out there.

> Kotlin won't replace Java. They do not have the same niche.

Claiming Java has a niche is very funny. I guess the niche is programmable computers? Well done.


Actually, Go is ported to microcontrollers with TinyGo. Even u-root works on microcontrollers nowadays.

For the more bulky processors, there's also tamago.


> Rust is firmly positioned to replace C and C++

I can't wait to see results. Until now, the only real world usage was for coreutils in Ubuntu, with disastrous consequences.

Anyway, without writing the OS in Rust, Rust will always be a second tier language.


> I can't wait to see results. Until now, the only real world usage was for coreutils in Ubuntu

I spent five years working at a company (Materialize) whose main product is entirely in Rust. Since then I work at a company (Polar Signals) where sadly I have to use C and Go, but the main backend storage layer is in Rust. And several of our customers use Rust and it would be a show-stopping bug for them if our product stopped working on Rust codebases.

Besides all that, plenty of companies you’ve heard of are now writing large amounts of new code in Rust — most notably Meta and Amazon. Large parts of Firefox are in Rust and have been for years.

Ubuntu coreutils is underselling it a bit.


> Until now, the only real world usage was for coreutils in Ubuntu,

This is simply not true, there are millions of lines of Rust code running in production at the largest tech companies in the world.


Carbon is what could someday be a real successor to C++ in my eyes. It actually focuses on being compatible with C++ code, similar to how Kotlin can work with Java directly, except it will make a lot more sense once it is stable and usable. I just hope it's not a 'forever-project' like Fuchsia.

> I think this is because of the gap in its target market -- Rust is firmly positioned to replace C and C++, which have a long history of safety issues.

The "long history of safety issues" is actually a combination of being extremely successful (the world runs on C and C++) and production software always featuring bugs.

The moment Rust started to gain some traction, we immediately started seeing CVEs originating from Rust code.


Java has features that Kotlin does not have. Virtual Threads and the APIs that support them are effectively incompatible with the Kotlin coroutine libraries.

Technically, Kotlin can use Virtual Threads like it can use any other Java API (when being compiled to JVM bytecode).[1] If I remember correctly, the Kotlin team was e.g. thinking about implementing a coroutine dispatcher with Virtual Threads. So building a Kotlin service that uses Virtual Threads instead of coroutines is, in principle, possible. But if you've got a Java project using Virtual Threads, you could rewrite it slowly, class by class, to Kotlin (and later probably refactor the Virtual Threads to coroutines).

What you have to keep in mind though: if you're creating a Kotlin lib that is meant to be used from Java, the public API of this lib should not expose suspend functions! My company made that mistake at some point. They ordered a lib from an external supplier, and since Java and Kotlin are both first-class languages and "Kotlin is compatible with Java", they ordered this lib in Kotlin. They learned the hard way that Kotlin suspend functions translate to Java methods taking a Continuation argument (if I remember correctly; I wasn't part of that project, but heard their story), which is very unergonomic to use manually. The fix was to write a small Kotlin shim lib that wrapped the other lib and replaced every suspend function with a normal function invoking its suspend counterpart in a runBlocking block (I think). Hardly ideal, but workable.

So yes, writing a Kotlin lib that's (also) meant to be consumed from Java requires more thought than one that is only meant to be used by Kotlin. (There is functionality in Kotlin to help with this though, e.g. look up the annotations @JvmStatic, @JvmName, @JvmRecord.)

[1] https://void2unit.onrender.com/post/virtualthreads-in-kotlin...


I'd love to see an AI agent auto-transpile C (SQLite, Apache, nginx, MariaDB, etc.) into Rust, run all the associated regression tests and perf benchmarks, and produce a report on the porting process and the perf delta.

SQLite's most thorough test suite is closed source, so no one other than the SQLite authors can attempt that. That said, you may be interested in this attempt by Turso to rewrite SQLite in Rust (https://turso.tech/blog/we-will-rewrite-sqlite-and-we-are-go...). They're not using AI, but they are using some novel ways to test their code.

I'm not a Rust evangelist, but I'm glad that Rust evangelists exist.

I decided to try it for a medium-sized (~10k LoC) performance sensitive component recently and it has been an absolute joy to use.


I'm mostly a Java dev, but baby-stepping Rust has been a lot of fun and reminds me, in a very good way, of the feeling I had in the late 90's when I was first learning Java.

Well that's because C, and C++, are uniquely awful and Rust can actually take them on.

Kotlin doesn't have a strong case for replacing Java because Java is, well, just fine. At least it's safe. Sure, it's, like, slightly inconvenient sometimes.

And other languages like Go, which originally claimed to take on C and C++, just don't. Go is garbage collected; it's not real competition.

But Rust is different. It's actually safe, and it's actually a real competitor. There's basically zero reason to choose C other than "I know it" or "one of my library authors knows it". Which are both very good reasons, but which incidentally have nothing to do with the language itself.


Go has already more or less won its target market. Rust proponents are still working to convince C and C++ holdouts, who are understandably skeptical given the past several decades of promised replacements that never materialized.

Rust is absolutely novel in being the first production-grade, memory-safe, low-level language.

I think that's true - but I guess Rust and Zig are unique in that list in being new manually memory-managed languages (i.e. without garbage collectors).

Low-level, manually memory-managed languages have mostly been C and C++ for a really long time, so I think Rust and Zig probably seem "more new" than the likes of Kotlin, Go, Elixir, Gleam, etc.


Can I start then with Scala - it's my favorite language and easily has the best of both OO and functional worlds with insanely useful libraries to express code ergonomically. EVERYBODY SHOULD USE IT!

Nah it’s too complex, has shipped too many breaking changes, and the community sucks.

I worked on a Scala project about 15 years ago and it definitely felt overly complex. Similar to early C++, everyone used their own dialect.

Always gotta get my yum yucked.

Just been screwed too many times I guess. I do like it more than Kotlin though. The language is powerful.

Yeah, Julia was all the rage for a while, and that kind of disappeared.

Some languages, like elixir, stick around with a low-volume, but consistently positive mention on HN. Which makes me want to use it more.


> I think it's more than just the normal amount for advocacy of a new language.

More than C++? More than Java? More than Python?


Yeah, I agree with this observation. Not sure why Rust is different here though.

I understand "shouldn't really have any say on it" as shouldn't expect to infuence the project. Not that they are not allowed to say anything.

Otherwise they would have written something along the lines of "shouldn't say anything about it".


I am pretty certain Rust is pushed more than other languages. Whether warranted or not is another topic, but I think the OP has a point here.

> People outside the project are allowed to say whatever the hell they want, the project doesn't have to listen.

Within reason - don't be a dick and all that. :)


"having a say on something", in OP's context, means authority and influence over decisions... People can say whatever they want, yes, but not everyone can "have a say on something".

Rust is pushed on the internet

I definitely wouldn't say the internet; I think it's popular on HN and a few other online forums. There are a lot of X/Twitter circles that are critical of Rust, as well as other sites.

In my mind at least, there's a decent risk Rust is going to end up like the next Haskell: its benefits other than safety are not that clear, and many of those features can be, and have been, replicated in other languages.


Many of Rust's biggest benefits come directly from other languages - including Haskell. Like, Rust's Option is identical to Haskell's Maybe type. Traits are similar to type classes in Haskell, or interfaces in Go.

In my mind, the thing that makes rust and zig nice are that they put modern language features in a systems language. Non-nullable pointers and match expressions in a language that runs as fast as C? Yes please.

I love rust, but I personally doubt rust will ever be anywhere near as popular as Python, go, JavaScript and C#. It’s just so complex and difficult to learn. But its niche is pretty clear to me: I see it as a tool for writing beautiful, correct, memory safe, performant systems code. I used to love C. But between zig, rust and Odin, I can’t see myself ever using it again. C is just so much less productive and less pleasant to use than more modern languages.
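As a rough, illustrative sketch of the Option/match point above (not code from any project mentioned here; the names are made up): the compiler forces the "no value" case to be handled before the value can be used, which is what non-nullable-by-default buys you.

    use std::collections::HashMap;

    // Option replaces the null pointer: the absent case is a value you must
    // handle, not a crash waiting to happen.
    fn price_of(prices: &HashMap<&str, u64>, item: &str) -> u64 {
        match prices.get(item) {
            Some(price) => *price, // value present
            None => 0,             // absence handled explicitly
        }
    }

    fn main() {
        let prices = HashMap::from([("apple", 3), ("pear", 5)]);
        println!("{}", price_of(&prices, "apple")); // 3
        println!("{}", price_of(&prices, "plum"));  // 0
    }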


> People outside the project are allowed to say whatever the hell they want,

And? GP didn't say that they shouldn't.


> People outside the project are allowed to say whatever the hell they want

What these people do is a disservice to the general open source community, by spreading misinformation and general FUD about critical software that uses C and C++.


> There is literally nothing strange or disproportionate. It's incredibly obvious that new languages, that were designed by people who found older languages lacking, are of interest to groups of people interested in new applications of technology and who want to use their new languages.

This is a disingenuous opinion. The level of militancy involved in this propaganda push goes way beyond mere interest. Just look at people who actually enjoy Java, C++, Python, etc. These are the most popular languages that mankind has ever developed, and countless people have built very successful careers around them. Yet you don't see even a fraction of the fanboys you see constantly pushing these experimental languages.


And here I was just reading in another thread that HN was so much less toxic than other places...

Where do you see people having toxic debates?

It is also worth noting that the design of Rust, in theory, and the recent bug in the Linux kernel Rust code (the message-passing abstraction used by Android), make clear that:

1. With Rust, you may lower the exposure, but the same classes of bug still remain. And of course, all the other non-memory-related bugs.

2. With C you may, if you wish, develop a strong sensitivity to race conditions, and stay alert. In general it is possible that C programmers have their "bugs antenna" a bit more developed than other folks.

3. With Rust, to decrease the amount of "unsafe" sections, you often need to build abstractions that may be a bit unnatural.

4. Rust may create a false sense of security, and in the unsafe sections the programmer, when reviewing the code, is sometimes falsely convinced by the mandatory SAFETY comment. Like in the Linux kernel bug, where such a comment was hallucinated by a human who sometimes (not sure in this specific case, it's just an example) may be less used to the "race spotting" process that C teaches you to do.

5. With Rust, in case of a bug, the fix may no longer be the one-liner you usually see in C fixes, which can make the exposure time window larger. Sometimes fixing things in Rust means refactoring in non-trivial ways.

6. With C, if the same amount of effort went into creating wrappers to make kernel programming safer at the cost of other things, the attack surface could also be lowered in a significant way (see for instance Redis's use of sds.c: how much direct string/pointer manipulation do we avoid? The same for other stuff, of course). Basically, things like sds.c let you put a big part of the unsafe business in a self-contained library.

So, is Rust an interesting language for certain features it has? Yes. Is Rust a silver bullet? No. So should Rust be "pushed" onto others? Hell no, and I suggest you reply in the firmest way to people stressing you out to adopt Rust at all costs.


The recent bug in the Linux kernel Rust code, based on my understanding, was in unsafe code, and related to interop with C. So I wouldn't really classify it as a Rust bug. In fact, under normal circumstances (no interop), people rarely use unsafe in Rust, and the use is very isolated.

I think the idea of developers developing a "bugs antenna" is good in theory, though in practice the kernel, Redis, and many other projects suffer from these classes of bugs consistently. Additionally, that's why people use linters and code formatters even though developers can develop a sensitivity to coding conventions (in fact, these tools used to be unpopular in C-land). Trusting humans to develop that sensibility is just not enough.

Specifically, about the concurrency: Redis is (mostly) single-threaded, and I guess that's at least in part because of the difficulty of building safe, fast and highly-concurrent C applications (please correct me if I'm wrong).

Can people write safer C (e.g. by using sds.c and the like)? For sure! But we've been writing C for 50+ years at this point, and at some point "people can just do X" is no longer a valid argument: while we could, in practice we don't.


I hear "people rarely use unsafe rust" quite a lot, but every time I see a project or library with C-like performance, there's a _lot_ of unsafe code in there. Treating bugs in unsafe code as not being bugs in rust code is kind of silly, also.

Exactly. You don't need much unsafe if you use Rust to replace a Python project, for instance. If there is lower level code, high performances needs, things change.

For replacing a Python project with Rust, unsafe blocks will comprise 0% of your code. For replacing a C project with Rust, unsafe blocks will comprise about 5% of your code. The fact that the percentage is higher in the latter case doesn't change the fact that 95% of your codebase is just as safe as the Python project would be.

A big amount of C code does not do anything unsafe either: it calls other stuff, does loops, business logic, and so forth. It is also wrong to believe 100% of C code is basically unsafe.

If so, then it should be trivial for someone to introduce something like Rust's `unsafe` keyword in C such that the unsafe operations can be explicitly annotated and encapsulated.

Of course, it's not actually this trivial because what you're saying is incorrect. C is not equipped to enforce memory safety; even mundane C code is thoroughly suffused with operations that threaten to spiral off the rails into undefined behavior.


It is not so hard to introduce a "safe" keyword in C. I have a patched GCC that does it. The subset of the language which can be used safely is a bit too small to be a full replacement on its own, but also not that small.

C lacks safe primitives or non-error-prone ways to build abstractions to refer to business objects. There are no safe string references, let alone ways to safely manipulate strings. Want to iterate over or index into a result set? You can try to remember to put bounds checks into every API function.

But even with explicit bounds checks, C has an ace up its sleeve.

    int cost_of_nth_item(int n) {
        if (n < 0 || n >= num_items)
            return -1;  // error handling
        …
    }
Safe, right? Not so fast, because if the caller has a code path that forgets to initialize the argument, it’s UB.
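For contrast, a minimal Rust sketch of the same shape (a hypothetical function, illustrative only): the bounds check is expressed with `get`, and an uninitialized argument at the call site is a compile error (E0381) rather than undefined behavior.

    fn cost_of_nth_item(items: &[i32], n: usize) -> Option<i32> {
        // `get` does the bounds check; an out-of-range `n` yields None, not UB.
        items.get(n).copied()
    }

    fn main() {
        let items = [10, 20, 30];
        let n = 2; // If this were `let n: usize;` with no value, the call below
                   // would be rejected at compile time (E0381) instead of
                   // compiling down to undefined behavior.
        println!("{:?}", cost_of_nth_item(&items, n));
    }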

Almost all C code does unsafe things. Dereferencing a pointer is unsafe, using the address of a variable is unsafe, adding signed integers is unsafe.

Who is saying that 100% of C code is unsafe? It's potentially unsafe, as in: the mainstream compilers are unable to prove the code is memory-safe.

Rust achieves a sizable but not complete victory on that front.

I can't find the extreme claims that you seem to argue against.


You're swapping definitions of unsafe. Earlier you were referring to the `unsafe` keyword. Now you're using `unsafe` to refer to a property of code. This makes it easy to say things like "It is also wrong to believe 100% of the C code is basically unsafe" but you're just swapping definitions partway through the conversation.

What I see is that antirez claims that absence of "safe" (as syntax) in C lang doesn't automatically mean that all of C code is unsafe (as property). There's no swapping of definitions as I see it.

I think there's a very clear switch of usage happening. Maybe it's hard to see so I'll try to point out exactly where it happens and how you can spot it.

First from antirez:

> You don't need much unsafe if you use Rust to replace a Python project, for instance. If there is lower level code, high performances needs, things change.

Use of the term `unsafe` here referring to the keyword / "blocks" of code. Note that this statement would be nonsensical if talking about `unsafe` as a property of code, certainly it would be inconsistent with the later unsafe since later it's claimed that C code is not inherently "unsafe" (therefor Rust would not be inherently "unsafe").

Kibwen staying on that definition here:

> For replacing a Python project with Rust, unsafe blocks will comprise 0% of your code. For replacing a C project with Rust, unsafe blocks will comprise about 5% of your code.

Here is the switch:

> A big amount of C code does not do anything unsafe as well

Complete shift to "unsafe" as being a property of code, no longer talking about the keyword or about blocks of code. You can spot it by just rewriting the sentences to use Rust instead of C.

You can say:

"A big amount of 'unsafe' Rust code does not do anything unsafe as well" "It is also wrong to believe 100% of the unsafe Rust code is basically unsafe."

I think that makes this conflation of terms clear, because we're now talking about the properties of the code within an "unsafe" block or globally in C. Note how clear it is in these sentences that the term `unsafe` is being swapped, we can see this by referring to "rust in unsafe blocks" explicitly.

This is just a change of definitions partway through the conversation.

p.s. @Dang can you remove my rate limit? It's been years, I'm a good boy now :)


Except that's a dishonest interpretation, especially for someone of antirez's experience.

High performance is not an on/off target. Safe rust really lets you express a lot of software patterns in a "zero-cost" way. Sure, there are a few patterns where you may need to touch unsafe, but safe rust itself is not slow by any means.

For your last sentence, I believe topics are conflated here.

Of course if one writes unsafe Rust and it leads to a CVE then that's on them. Who's denying that?

On the other hand, having to interact with the part of the landscape that's written in C mandates the use of the `unsafe` keyword and not everyone is ideally equipped to be careful.

I view the existence of `unsafe` as pragmatism; Rust never would have taken off without it. And if 5% of all Rust code is potentially unsafe, well, that's still much better than C where you can trivially introduce undefined behavior with many built-in constructs.

Obviously we can't fix everything in one fell swoop.


> Of course if one writes unsafe Rust and it leads to a CVE then that's on them. Who's denying that?

> The recent bug in the Linux kernel Rust code, based on my understanding, was in unsafe code, and related to interop with C. So I wouldn't really classify it as a Rust bug.

Sometimes it's good to read the whole thread.


I did and it does not quite compute. That was glue code, related to interoperating with C. Not a "normal" everyday Rust code. It's an outlier.

Helps to read and ingest context.

Though I do agree that in the strictest of technical senses it's indeed a "Rust" bug, as in: bug in code written in Rust.


Why is glue code not normal code in Rust? I don't think anyone else would say that for any other language out there. Does it physically pain you to admit it's a bug in Rust code? I write bugs in all kind of languages and never feel the need for adjectives like "technical", "normal", "everyday" or words like "outlier" to make me feel not let down by the language of choice.

I have worked with Rust for ~3.5 years. I had to use the `unsafe` keyword, twice. In that context it's definitely not everyday code. Hence it's difficult to use that to gauge the language and the ecosystem.

Of course it's a bug in Rust code. It's just not a bug that you would have to protect against often in most workplaces. I probably would have allowed that bug easily because it's not something I stumble upon more than once a year, if even that.

To that effect, I don't believe it's fair to gauge the ecosystem by such statistical outliers. I make no excuses for the people who allowed the bug. This thread is a very good demonstration as to why: everything Rust-related is super closely scrutinized and immediately blown out of proportion.

As for the rest of your emotionally-loaded language -- get civil, please.


I don't care if there can be a bug in Rust code. It doesn't diminish the language for me. I don't appreciate mental gymnastics when evidence is readily available and your comments come across as a compulsive defense of something nobody was really attacking. I'm sorry for the jest in the comments.

I did latch onto semantics for a little while, that much is true, but you are making it look much worse than it is. And yes, I get PTSD and eye-roll syndrome from the constant close scrutiny of Rust, even though I haven't actively worked with it for a while now. It gets tiring to read, and many interpretations are dramatically negative for no reason other than some imagined "Rust zealots always defending it", which I have not seen in a long time here on HN.

But you and I seem to be much closer in opinion and stance than I thought. Thanks for clarifying that.


The bug in question is in rust glue code that interfaces with a C library. It's not in the rust-C interface or on the C side. If you write python glue code that interfaces with numpy and there's a bug in your glue, it's a python bug not a numpy bug.

I already agreed that technically it is indeed a bug in the Rust code. I would just contest that such a bug is representative is all. People in this thread seem way too eager to extrapolate which is not intellectually curious or fair.

Nobody is extrapolating from this bug to the rest of rust. The comment I responded to initially was denying that this was a rust bug.

You and a few others don't -- I did not make that clear, apologies. It's disheartening that a good amount of others do.

In Rust you can avoid "unsafe" when you use Rust as if it were Go or Python. If you write low-level code, which is where C is in theory replaceable only by Rust (and not by Go), then you find yourself needing to write many unsafe sections. And to lower the amount of unsafe sections, you often have to build unnatural abstractions in order to group such unsafe sections into common patterns. It is a tradeoff, not a silver bullet.

Not necessarily at all. Go peruse the `regex` crate source code, including its dependencies.

The biggest `unsafe` sections are probably for SIMD-accelerated search. There are no "unnatural abstractions" there. Just a memmem-like interface.

There's some `unsafe` for eliding bounds checks in the main DFA search loops. No unnatural abstractions there either.

There's also some `unsafe` for some synchronization primitives for managing mutable scratch space to use during a search. A C library (e.g., PCRE2) makes the caller handle this. The `regex` crate does it for you. But not for unnatural reasons. To make using regexes simpler. There are lower level APIs that provide the control of C if you need it.

That's pretty much it. All told, this is a teeny tiny fraction of the code in the `regex` crate (and all of its dependencies).

Finally, a demonstration of C-like speed: https://github.com/BurntSushi/rebar?tab=readme-ov-file#summa...

> It is a tradeoff, not a silver bullet.

Uncontroversial.
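As a rough illustration of the bounds-check-elision point above (a made-up function, not code from the regex crate): one up-front check justifies the unchecked indexing, and the `unsafe` never leaks past the safe signature.

    /// Sums a window of bytes without per-iteration bounds checks.
    /// Callers only ever see a safe API.
    fn sum_window(haystack: &[u8], start: usize, len: usize) -> Option<u64> {
        let end = start.checked_add(len)?;
        if end > haystack.len() {
            return None;
        }
        let mut total = 0u64;
        for i in start..end {
            // SAFETY: start <= i < end and end <= haystack.len(),
            // so the index is always in bounds.
            total += unsafe { *haystack.get_unchecked(i) } as u64;
        }
        Some(total)
    }

    fn main() {
        let data = [1u8, 2, 3, 4, 5];
        assert_eq!(sum_window(&data, 1, 3), Some(9)); // 2 + 3 + 4
        assert_eq!(sum_window(&data, 4, 3), None);    // would run off the end
    }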


I think this framing is a bit backwards. Many C programs (and many parts of C programs) would benefit from being more like Go or Python as evident by your very own sds.c.

Now, if what you're saying is that with super highly optimized sections of a codebase, or extremely specific circumstances (some kernel drivers) you'd need a bit of unsafe rust: then sure. Though all of a sudden you flipped the script, and the unsafe becomes the exception, not the rule; and you can keep those pieces of code contained. Similarly to how C programmers use inline assembly in some scenarios.

Funny enough, this is similar to another place where Rust did the opposite of C, and is much better for it: immutable by default (opt into mutation with `let mut`, vs. opt-in `const` in C) and non-nullable by default (and even being able to define something as non-null). Flipping the script so that the good thing is the default and the bad thing is rare was a huge win.

I definitely don't think Rust is a silver bullet, though I'd definitely say it's at least a silver alloy bullet. At least when it comes to the above topics.


In my experience (several years of writing high performance rust code), there’s only really 2 instances where you need unsafe blocks:

- C interop

- Low level machine code (eg inline assembly)

Most programs don’t need to do either of those things. I think you could directly port Redis to entirely safe Rust, and it would be just as fast. (Though there will need to be unsafe code somewhere to wrap epoll.)

And even when you need a bit of unsafe, it’s usually a tiny minority of any given program.

I used to think you needed unsafe for custom container types, but now I write custom container types in purely safe rust on top of Vec. The code is simpler, and easier to debug. And I’m shocked to find performance has mostly improved as a result.
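A minimal sketch of that Vec-backed approach (a toy, append-only doubly-linked list with made-up names): nodes live in a Vec and refer to each other by index, so there are no raw pointers and no `unsafe` anywhere.

    struct Node<T> {
        value: T,
        prev: Option<usize>,
        next: Option<usize>,
    }

    struct VecList<T> {
        nodes: Vec<Node<T>>,
        head: Option<usize>,
        tail: Option<usize>,
    }

    impl<T> VecList<T> {
        fn new() -> Self {
            VecList { nodes: Vec::new(), head: None, tail: None }
        }

        // Append a value and link it to the previous tail by index.
        fn push_back(&mut self, value: T) -> usize {
            let idx = self.nodes.len();
            self.nodes.push(Node { value, prev: self.tail, next: None });
            match self.tail {
                Some(t) => self.nodes[t].next = Some(idx),
                None => self.head = Some(idx),
            }
            self.tail = Some(idx);
            idx
        }

        // Walk the links from head to tail, entirely in safe code.
        fn iter(&self) -> impl Iterator<Item = &T> + '_ {
            let mut cur = self.head;
            std::iter::from_fn(move || {
                let i = cur?;
                cur = self.nodes[i].next;
                Some(&self.nodes[i].value)
            })
        }
    }

    fn main() {
        let mut list = VecList::new();
        list.push_back("a");
        list.push_back("b");
        list.push_back("c");
        let items: Vec<&str> = list.iter().copied().collect();
        assert_eq!(items, ["a", "b", "c"]);
    }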


> was in unsafe code, and related to interop with C

1) "interop with C" is part of the fundamental requirements specification for any code running in the Linux kernel. If Rust can't handle that safely (not Rust "safe", but safely), it isn't appropriate for the job.

2) I believe the problem was related to the fact that Rust can't implement a doubly-linked list in safe code. This is a fundamental limitation, and again is an issue when the fundamental requirement for the task is to interface to data structures implemented as doubly-linked lists.

No matter how good a language is, if it doesn't have support for floating point types, it's not a good language for implementing math libraries. For most applications, the inability to safely express doubly-linked lists and difficulty in interfacing with C aren't fundamental problems - just don't use doubly-linked lists or interface with C code. (well, you still have to call system libraries, but these are slow-moving APIs that can be wrapped by Rust experts) For this particular example, however, C interop and doubly-linked lists are fundamental parts of the problem to be solved by the code.


> If Rust can't handle that safely (not Rust "safe", but safely), it isn't appropriate for the job.

Rust is no less safe at C interop than using C directly.


As long as you keep C pointers as pointers. The mutable aliasing rules can bite you though.

(Not the user you were replying to)

If Rust is no less safe than C in such a regard, then what benefit is Rust providing that C could not? I am genuinely curious because OS development is not my forte. I assume the justification to implement Rust must be contingent on more than Rust just being 'newer = better', right?


It's not less safe in C interop. It is significantly safer at everything else.

The issue is unrelated to expressing linked lists, it's related to race conditions in the kernel, which is one of the hardest areas to get right.

This could have happened with no linked lists whatsoever. Kernel locks are notoriously difficult, even for Linus and other extremely experienced kernel devs.


> This is a fundamental limitation

Not really. Yeah, you need to reach into unsafe to make a doubly-linked list that passes the borrow checker.

Guess what: you need an unsafe implementation to print to the console. That doesn't mean printing is unsafe in Rust.

That's the whole point of safe abstraction.
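For what it's worth, the textbook illustration of that safe-abstraction point (this mirrors the standard library's `split_at_mut`, not the kernel code under discussion): the raw-pointer work is confined to one audited spot, and every caller gets a safe signature they cannot misuse.

    fn split_at_mut<T>(slice: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
        let len = slice.len();
        assert!(mid <= len, "mid out of bounds");
        let ptr = slice.as_mut_ptr();
        // SAFETY: the two halves don't overlap and both stay inside the
        // original slice because mid <= len.
        unsafe {
            (
                std::slice::from_raw_parts_mut(ptr, mid),
                std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }

    fn main() {
        let mut data = [1, 2, 3, 4, 5];
        let (left, right) = split_at_mut(&mut data, 2);
        left[0] = 10;
        right[0] = 30;
        assert_eq!(data, [10, 2, 30, 4, 5]);
    }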


I love rust, but C does make it a lot easier to make certain kinds of container types. Eg, intrusive lists are trivial in C but very awkward in rust. Even if you use unsafe, rust’s noalias requirement can make a lot of code much harder to implement correctly. I’ve concluded for myself (after a writing a lot of code and a lot of soul searching) that the best way to implement certain data structures is quite different in rust from how you would do the same thing in C. I don’t think this is a bad thing - they’re different languages. Of course the best way to solve a problem in languages X and Y are different.

And safe abstractions mean this stuff usually only matters if you’re implementing new, complex collection types. Like an ECS, b-tree, or Fenwick tree. Most code can just use the standard collection types. (Vec, HashMap, etc). And then you don’t have to think about any of this.


>> I guess that's at least in part because of the difficulty of building safe, fast and highly-concurrent C applications (please correct me if I'm wrong).

You wrote that question in a browser mostly written in C++, running on an OS most likely written in C.


Just because the pyramids exist, does that mean they were easy to build?

OS and browser development are seriously hard and took countless expert man hours.


An OS can actually be pretty simple to make. Sometimes making one is part of a CS curriculum. If it were so much easier to do it in other languages (e.g. in Rust), don't you think we would already be using them?

https://github.com/flosse/rust-os-comparison

Writing a toy one? Sure.

Writing a real one? Who's gonna write all the drivers and the myriad other things?

And the claim was not that it's "so much easier", but that it is so much easier to write it in a secure way. Which claim is true. But it's still a complex and hard program.

(And don't even get started on browsers, it's no accident that even Microsoft dropped maintaining their own browser).


The toy one can still be as highly concurrent as the real one. The amount of drivers written for it doesn't matter.

The point is if it were much easier, then they would overtake existing ones easily, just by adding features and iterating so much faster and that is clearly not the case.

>>difficulty of building safe, fast and highly-concurrent C

This was the original claim. The answer is: there is a tonne of C code out there that is safe, fast and concurrent. Isn't that logical? We have been using C for the last 50 years to build stuff, and there is a lot of it. There doesn't seem to be a big jump in productivity with the newer generation of low-level languages, even though they have many improvements over C.

This is anecdotal: I used to do a lot of low-level C and C++ development, and C++ is a much bigger language than C. Honestly, I don't think I was ever more productive with it. Maybe the code looked more organized and extendable, but it took the same or a larger amount of time to write. On the other hand, when I develop with JavaScript or C#, I'm easily 10 times more productive than I would be with either C or C++. This is a bit of an apples-and-oranges comparison, but what I'm trying to say is that new low-level languages don't bring huge gains in productivity.


> With C you may, if you wish, develop a big sensibility to race conditions, and stay alert. In general it is possible that C programmers have their "bugs antenna" a bit more developed than other folks.

I suppose it's possible. I wonder if I'll become a better driver if I take off my seatbelt. Or even better, if I take my son out of my car seat and just let him roam free in the back seat. I'm sure my wife will buy this.

In all seriousness, your comment reminds me of this funny video: https://www.youtube.com/watch?v=glmcMeTVIIQ

It's nowhere near a perfect analogy, but there are some striking similarities.


Human behaviour can be a confounding thing. There was some debate a while ago [1] about whether bike helmet use may actually lead to more head injuries, due to factors like drivers passing closer to helmeted riders vs. unhelmeted ones, or riders riding more recklessly, among a tonne of other factors. I still prefer to wear a helmet, but it's an interesting example of how difficult it can be to engineer human behaviour.

Another good example of this is how civil engineers add safety factors into the design of roads - generous lane widths, straighter curves, and so on - which leads drivers to speed more and decreases road safety overall.

1. https://bigthink.com/articles/the-bike-helmet-paradox/


FWIW, FAFO is a very good way to learn. Assuming we can respawn indefinitely and preserve knowledge between respawns, driving fast and taking off your seatbelt would definitely teach you more than just reading a book.

But in this specific case, if the respawn feature is not available or dying isn't a desirable event, FAFO might not be the best way to learn how to drive.


I also think we have the data in for memory safety in C. Even the best people, with the best processes in the world seem to keep writing memory safety bugs. The “just be more vigilant” plan doesn’t seem to work.

> FWIW, FAFO is a very good way to learn. Assuming we can respawn indefinitely and preserve knowledge between respawns, driving fast and taking off your seatbelt would definitely teach you more than just reading a book.

Yes, just sucks for the person who you hit with your car, or the person whose laptop gets owned because of your code.

"FAFO" is not a great method of learning when the cost is externalized.


> With C you may, if you wish, develop a big sensibility to race conditions, and stay alert. In general it is possible that C programmers have their "bugs antenna" a bit more developed than other folks.

I think there are effects in both directions here. In C you get burned, and the pain is memorable. In Rust you get forced into safe patterns immediately. I could believe that someone who has done only Rust might be missing that "healthy paranoia". But for teaching in general, it's hard to beat frequent and immediate feedback. Anecdotally it's common for experienced C programmers to learn about some of the rules only late in their careers, maybe because they didn't happen to get burned by a particular rule earlier.

> Rust may create a false sense of security, and in the unsafe sections the programmer sometimes, when reviewing the code, is falsely convinced by the mandatory SAFETY comment.

This is an interesting contrast to the previous case. If you write a lot of unsafe Rust, you will eventually get burned. If you're lucky, it'll be a Miri failure. I think this makes folks who work with unsafe Rust extremely paranoid. It's also easier to sustain that level of paranoia with Rust, because you hopefully only have to consider small bits of unsafe code in isolation, and not thousands of lines of application logic manipulating raw pointers or whatever.


The amount of paranoia I need for unsafe Rust is orders of magnitude higher than for C. Keeping track of the many things that can implicitly drop values and/or free memory, and figuring out if I'm handling raw pointers and reference conversions in a way that doesn't accidentally alias, is painful. The C rules are fewer and simpler, are also well known, and are alleviated and documented by guidelines like MISRA. Unsafe Rust has more rules, which seem underspecified, underdocumented, and also unstable. Known unknowns are preferable over unknown unknowns.

A quick unscientific count on cve.org counts ~86 race condition CVEs in the Linux kernel last year, so you might be overstating how well bug antennas work.

If the kernel was completely written in Rust, we could have a lot of unsafe places, and many Rust CVEs. It is hard to tell, and the comparison in theory should be made after the kernel is developed only by people lacking the C experience that made the current developers so able to reason about race conditions (also when they write Rust).

That's quite the double standard. You extrapolate from one single Rust bug, but insist that "it's hard to tell" and you need completely unrealistic levels of empirical evidence to draw conclusions from the reported C bugs...

Reminds me of this classic: "Beware Isolated Demands For Rigor" (https://slatestarcodex.com/2014/08/14/beware-isolated-demand...)


86 race conditions compared to what baseline? This is a bit meaningless without benchmarking against other kernels

It's 1 compared to 86, 86 is the baseline.

But you need to control for lines of code at the very least — otherwise you're comparing apples to oranges

I'm perfectly happy to say that it's not a very good way to make a comparison.

Then it would not be unscientific.

Yeah I mean I could also say "there are no CVEs written in PERL in the kernel ergo PERL is safer to write than Rust". Given there's close to zero .pl files in the kernel, I think we can all agree my assertion holds

That claim relies on an absurd "in the kernel" qualifier, making it difficult to agree with. Furthermore, your hypothesis is that "we all" agree with claims that rely on absurd conditions as a matter of course.

That is no base line. That is a comparison with no statistical value.

Tbh I thought that was clear when I used the phrase "unscientific".

> In general it is possible that C programmers have their "bugs antenna" a bit more developed than other folks.

If that were truly the case, we wouldn’t need Rust now, would we!


Love it

(2) and (3) just don't seem to be the case empirically. One bug that was directly in a grep'able `unsafe` block is hardly evidence of these, whereas Google's study on Rust has demonstrated (far more rigorously imo) the opposite. I think anyone paying attention would have guessed that the first Rust CVE would be a race - it is notoriously hard to get locking/ race semantics correct in the kernel, not even the C programmers get it right, it's an extremely common bug class and I believe Linus has basically said something along the lines of "no one understands it" (paraphrasing).

(4) Again, this doesn't seem to be borne out empirically.

(5) I've seen plenty of patches to C code that are way more than a single line for the Linux kernel, but sure, maybe we grant that a bug fix in Rust requires more LOC changed? It'd be nice to see evidence. Is the concern here that this will delay patching? That seems unlikely.

It's not uncommon at all for patches to the C code in the kernel that "make this generally safe" to be thousands of lines of code, seeding things like a "length" value through the code, and to take years to complete. I don't think it's fair to compare these sorts of "make the abstraction safe" fixes vs "add a single line check" fixes.

(6) Also not borne out. Literally billions spent on this.

> So, is Rust an interesting language for certain features it has? Yes. Is Rust a silver bullet? No.

Agreed. I'm tempted to say that virtually no one contests the latter lol

> So should Rust be "pushed" to others, hell no, and I suggest you to reply in the most firm way to people stressing you out to adopt Rust at all the costs.

I guess? You can write whatever you want however you want, but users who are convinced that Rust code will provide a better product will ask for it, and you can provide your reasoning (as SQLite does here, very well imo) as firmly as you'd please, I think.

edit: And to this other comment (I'm rate limited): https://news.ycombinator.com/item?id=46513428

> made the current developers so able to reason about race conditions (also when they write Rust).

Aha. What? Where'd you get this from? Definitely not from Linus, who has repeatedly stated that lock issues are extremely hard to detect ahead of time.

> we’ve tweaked all the in-kernel locking over decades [..] and even people who know what they are doing tend to get it wrong several times

https://lwn.net/Articles/808498/

Definitely one of MANY quotes and Linus is not alone.


Google have published a couple high-level Rust blog posts with many graphs and claims, but no raw data or proofs, so they haven’t demonstrated anything.

By now their claims keep popping up in Rust discussion threads without any critical evaluation, so this whole communication is better understood as a marketing effort and not a technical analysis.


> Google have published a couple high-level Rust blog posts with many graphs and claims, but no raw data or proofs, so they haven’t demonstrated anything.

Don't expect proofs from empirical data. What we have is evidence. Google has published far better evidence, in my view, than "we have this one CVE, here are a bunch of extrapolations".

> By now their claims keep popping up in Rust discussion threads without any critical evaluation,

Irrelevant to me unless you're claiming that I haven't critically evaluated the information for some reason.


It should not be strange that a tool which is better in every way and makes your code less buggy by default has its praises sung by most of the people who use it. It would be odd to go around saying 'electric drills are strangely and disproportionately pushed at Home Depot over the good old hand auger', and even if I don't work at your contracting company I'd be slightly unnerved about you working on my house.

I’ve heard this analogy used to justify firing developers for not using GenAI: a cabinet maker who doesn’t use power tools shouldn’t be working as a cabinet maker.

If only programming languages (or GenAI) were tools like hammers and augers and drills.

Even then the cabinets you see that come out of shops that only use hand tools are some of the most sturdy, beautiful, and long lasting pieces that become the antiques. They use fewer cuts, less glue, avoid using nails and screws where a proper joint will do, etc.


Less glue and avoidance of nails and screws doesn't make it sturdier. Fastening things strongly makes your furniture sturdier than not doing so. Antiques suck as often as they don't, and moreover you are only seeing the ones that survived, without a base rate to compare against; they succeeded despite being made without power tools, and power tools would have made the same object better.

Comparing it to AI makes no sense. Invoking it is supposed to bring to mind the fact that it's worse in well-known ways, but then the statement 'better in every way' no longer applies. Using Rust passively improves the engineering quality compared to using anything else, unlike AI which sacrifices engineering quality for iteration speed.


> Less glue and avoidance of nails and screws doesn't make it sturdier. Fastening things strongly makes your furniture sturdier than not doing so.

No disrespect intended, but your criticism of the analogy reveals that you are speaking from assumptions, but not knowledge, about furniture construction.

In fact, less glue, and fewer fasteners (i.e. design that leverages the strength of the materials), is exactly how quality furniture is made more sturdy.


There was an interesting video on YT where an engineer from a fastener company joined a carpenter to compare their products with traditional joints.

The traditional joints held up very well and even beat the engineered connectors in some cases. Additionally one must be careful with screws and fasteners: if they’re not used according to spec, they may be significantly weaker than expected. The presented screws had to be driven in diagonally from multiple positions to reach the specified strength; driving them straight in, as the average DIYer would, would have resulted in a weak joint.

Glue is typically used in traditional joinery, so less glue would actually have a negative effect.


> Glue is typically used in traditional joinery, so less glue would actually have a negative effect.

And a lot of traditional joinery is about keeping the carcase sufficiently together even after the hide glue completely breaks down so that it can be repaired.

Modern glues allow you to use a lot less complicated joinery.


The thing is that more than a few people disagree that it is better in every way.

I'm very much into Rust but this article is precisely about the fact that Rust is not "better in every way"...

This article was written nine years ago, when Rust 1.0 was two years old, by an author who spent a small (but nonzero) amount of time evaluating Rust.

This page was last updated on 2025-05-09 15:56:17Z

Given the author's misunderstanding of what Rust provides, the most charitable interpretation is that they haven't updated the parts discussing Rust since 2017. If they had, it would reflect more poorly on them.

The most charitable interpretation of this is that the Reddit mods should stick to Reddit. If they had, this wouldn't have reflected so poorly on them.

Not true, the page is updated every now and again.

The issue is that Rust proponents automatically assume that if you write enough C code, there will be memory related bugs.

In reality, this is not the case. Bad code is the result of bad developers. I'd rather have someone writing C code who understands how memory bugs happen than a Rust developer thinking that the compiler is going to take care of everything for them.


The topic seems to be native programming languages -- I don't think any of the languages concerned are "better in every way" for every possible coding problem. Many will rightfully choose Fortran over Rust for their application -- knowing full well their choice is far away from "better in every way".

When writing code meant to last, you need a language that’s portable across compilers and through time. C has demonstrated both. Fortran 77 and 90 were portable across compilers, but are now at risk from breaking changes, and later versions are not very portable across compilers.

Bad analogy.

If the alternative has drawbacks (they always do) or is not as well known by the team, it's perfecly fine to keep using the tool you know if it is working for you.

People who incessantly try to evangelise their tool/belief/preferences to others are often seen as unpleasant to say the least and they often achieve the opposite effect of what they seek.


Of course there are drawbacks to power tools. You could run out of battery, for example, and now it's useless.

But everyone with a brain knows the costs are worth the benefits.


I was talking in general if that escaped you. Hence "beliefs/preferences" and not only tools.

And when it comes to programming languages, it's not as clear cut. As exemplified by the article.

So power tools are a poor analogy.


Is there still pressure to use OOP? On Hacker News, at least, the trend seems to be moving in the opposite direction. There’s growing skepticism toward OOP, and that’s reflected in the popularity of languages like Rust and Zig, which are explicitly designed to push against traditional object-oriented patterns.

That’s not to say OOP advocacy has disappeared from HN. It still exists, but it no longer feels dominant or ascendant. If anything, it feels like a legacy viewpoint maintained by a sizable but aging contingent rather than a direction the community is moving toward.

Part of OOP’s staying power comes from what I’d call a cathartic trap. Procedural programming is intuitive and straightforward. OOP, by contrast, offers a new conceptual lens: objects, inheritance, polymorphism, and eventually design patterns. When someone internalizes this framework, there’s often a strong moment of “clicking” where complex software suddenly feels explainable and structured. That feeling of insight can be intoxicating. Design patterns, in particular, amplify this effect by making complexity feel principled and universally applicable.

But this catharsis is easy to confuse with effectiveness. The emotional satisfaction of understanding a system is orthogonal to whether that system actually produces better outcomes. I’ve seen a similar dynamic in religion, where the Bible’s dense symbolism and internal coherence produce a powerful sense of revelation once everything seems to “fit” together. The feeling is real, but it doesn’t validate the underlying model.

In practice, OOP often increases complexity and reduces modularity. This isn’t always obvious from inside the paradigm. It tends to become clear only after working seriously in other paradigms, where composition, data-oriented design, or functional approaches make the tradeoffs impossible to ignore.


In my circles, there is one part of OOP seen as positive: encapsulation. Everything else, especially inheritance and partly polymorphism, is seen extremely negatively. The hype is over. BUT: I still hear, more often than I would like, some manager stating “of course we will use C++, because is THE OOP language, and everybody knows OOP and UML are the only right way of doing software”. This is an actual verbatim statement I had to listen to 4 years ago.

I mostly agree with your assessment of how OOP is viewed today. In many technical circles, inheritance is seen as actively harmful, and polymorphism is at best tolerated and often misunderstood. The hype is largely gone. I’ve also heard the same managerial rhetoric you mention, where OOP and UML are treated as unquestionable defaults rather than design choices, so that part unfortunately still resonates.

Where I disagree is on encapsulation being the “good” part of OOP.

Encapsulation, as a general idea, is positive. Controlling boundaries, hiding representation, and enforcing invariants are all valuable. But encapsulation as realized through objects is where the deeper problem lies. Objects themselves are not modular, and the act of encapsulating a concept into an object breaks modularity at the moment the boundary is drawn.

When you encapsulate something in OOP, you permanently bind state and the methods that mutate that state into a single unit. That decision fixes the system’s decomposition early and hardens it. Behavior can no longer move independently of data. Any method that mutates state is forever tied to that object, its invariants, and its lifecycle. Reuse and recomposition now operate at the object level rather than the behavior level, which is a much coarser and more rigid unit of change.

This is the core issue. Encapsulation in OOP doesn’t just hide implementation details; it collapses multiple axes of change into one. Data representation, behavior, and control flow are fused together. As requirements evolve, those axes almost never evolve in lockstep, but the object boundary forces them to.

What makes this especially insidious is that the failure mode is slow and subtle. OOP systems don’t usually fail immediately. They degrade over time. As new requirements arrive, developers feel increasing resistance when trying to adapt the existing design. Changes start cutting across object boundaries. Workarounds appear. Indirection layers accumulate. Eventually the system is labeled as having “too much tech debt” or being the result of “poor early design decisions.”

But this framing misses the point. The design mistakes were not merely human error; they were largely inevitable given the abstraction. The original object model could not have anticipated future requirements, and because it was not modular enough to allow the design itself to evolve, every change compounded rigidity. The problem wasn’t that the design was wrong. It’s that it was forced to be fixed.

Polymorphism doesn’t fundamentally resolve this, and often reinforces it. While polymorphism itself is not inherently object-oriented, in OOP it is typically expressed through stateful objects and virtual dispatch. That keeps behavior anchored to object identity and mutation rather than allowing it to be recomposed freely as requirements shift.

The deeper requirement is that a system must be modular enough not just to extend behavior, but to change its own design as understanding improves. Object-based encapsulation works directly against this. It locks in assumptions early and makes architectural change progressively more expensive. By the time the limitations are obvious, the system is already entangled.

So while I agree that inheritance deserves much of the criticism, I think encapsulation via objects is the more fundamental problem. It’s not that encapsulation is bad in principle. It’s that object-based encapsulation produces systems that appear well-structured early on, but inevitably accumulate rigidity and hidden coupling over time. What people often call “tech debt” in OOP systems is frequently just the unavoidable artifact of an abstraction that was never modular enough to begin with. OOP was the tech debt.

The way forward is actually simple. Avoid mutation as much as possible. Use static methods (aka functions) as much as possible. Segregate IO and mutation into their own module, separate from all other logic. Those rules are less of a cathartic paradigm shift than OOP, and it takes another leap to see why doing these actually resolves most of the issues with OOP.
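As a rough illustration of what that looks like in practice (a hypothetical sketch in Rust, with made-up names, since Rust is the language under discussion here): plain data, pure functions that return new values instead of mutating, and IO pushed to the edge of the program.

    // Plain data: no hidden state, no methods required to touch it.
    #[derive(Clone, Debug)]
    struct Account {
        balance: i64,
    }

    // Pure function ("static method"): data in, new data out; no IO, no mutation.
    fn deposit(account: &Account, amount: i64) -> Account {
        Account { balance: account.balance + amount }
    }

    fn main() {
        // IO and state changes live here, at the edge, separate from the logic above.
        let before = Account { balance: 100 };
        let after = deposit(&before, 50);
        println!("before: {:?}, after: {:?}", before, after);
    }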


> Where I disagree is on encapsulation being the “good” part of OOP. Encapsulation, as a general idea, is positive. Controlling boundaries, hiding representation, and enforcing invariants are all valuable. But encapsulation as realized through objects is where the deeper problem lies. Objects themselves are not modular, and the act of encapsulating a concept into an object breaks modularity at the moment the boundary is drawn.

I’m personally with you here. Just in my circle they see it positively. But I agree with you: as long as it helps modularity, great, but also have many downsides that you describe very well.

Your last paragraph also aligns perfectly with my views. I think you are coming from a functional PoV, which luckily seems to have gained some more traction in the last decade or two. Sadly, before you say it, what often gets emphasized are the parts of functional programming that are not the most useful ones you address here… but maybe, some day…


> There’s growing skepticism toward OOP, and that’s reflected in the popularity of languages like Rust and Zig, which are explicitly designed to push against traditional object-oriented patterns.

I don't know about Zig, but my experience with Rust's trait system is that it isn't explicitly against OOP. Traits and generics feel like an extension and generalization of the OOP principles. With OOP, you have classes (types) and/or objects (instances) and bunch of methods specific to the class/object. In Rust, you extend that concept to almost all types including structs and enums.

> OOP, by contrast, offers a new conceptual lens: objects, inheritance, polymorphism, and eventually design patterns.

Rust doesn't have inheritance in the traditional sense, but most OOP languages prefer composition to data inheritance. Meanwhile, polymorphism, dynamic dispatch and design patterns all exist in Rust.
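For what it's worth, here is the sort of thing I mean, as a small hypothetical sketch: a trait fills the role an interface plays in an OOP language, and `Box<dyn Shape>` gives dynamic dispatch without any inheritance hierarchy.

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { radius: f64 }
    struct Square { side: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
    }

    impl Shape for Square {
        fn area(&self) -> f64 { self.side * self.side }
    }

    fn main() {
        // Dynamic dispatch over trait objects, resolved through a vtable at runtime.
        let shapes: Vec<Box<dyn Shape>> = vec![
            Box::new(Circle { radius: 1.0 }),
            Box::new(Square { side: 2.0 }),
        ];
        for s in &shapes {
            println!("{}", s.area());
        }
    }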


> I don't know about Zig, but my experience with Rust's trait system is that it isn't explicitly against OOP. Traits and generics feel like an extension and generalization of the OOP principles. With OOP, you have classes (types) and/or objects (instances) and bunch of methods specific to the class/object. In Rust, you extend that concept to almost all types including structs and enums.

That’s not OOP. Traits and generics are orthogonal to OOP. It’s just that OOP is likely where you learned these concepts, so you think they originated there.

What’s unique to OOP is inheritance and encapsulation.

Design patterns aren’t unique to OOP either, but there’s a strong cultural association with it. The term often involves strictly using encapsulated objects as the fundamental building block for each “pattern”.

The origin of the term “design patterns” was in fact established in the context of OOP through the famous book, and it is often used exclusively to refer to OOP, but the definition of the term itself is broader.


Context: Along with the never-ending pressure to migrate projects to the new shiny thing, there is a lot of momentum against C and other memory-unsafe languages.

The US government recently called on everyone to stop using them and move to memory-safe languages.

Regardless, there are practices and tools that significantly help produce safe C code and I feel like more effort should be spent teaching C programmers those.

Edit: Typos, and to make the point that I'm not necessarily defending C, just acknowledging its place. I haven't written a significant amount of C in probably over 2 decades, aside from microcontroller C.


C was my first language, more than thirty years ago. I've heard (and probably myself made) the same arguments over and over and over. But those arguments are lame and wrong.

C cannot be made safe (at scale). It's like asbestos. In fact, C is a hazardous material in exactly the same way as asbestos. Naturally occurring, but over industrialized and deployed far too widely before its dangers were known. Still has its uses but it will fuck you up if you do not use industrial-grade PPE.

Stop using C if you can. Stop arguing other people should use it. There have always been alternatives and the opportunity cost of ecosystems continuing to invest in C has massive externalized costs for the entire industry and society as a whole.


> in exactly the same way

C is not known to the state of California to cause cancer.


Not yet

Asbestos causes mesothelioma and gruesome death. C does not. Be serious.

When C code is run in machines capable of failing with gruesome death, its unsafeness may indeed result in gruesome death.

> When C code is run in machines capable of failing with gruesome death, its unsafeness may indeed result in gruesome death.

And yet, it never does. It's been powering those types of machines for likely longer than you have been alive, and in the one exception I can think of where lives were lost, the experts found that the development process was at fault, not the language.

If it was as bad as you make out, we'd have many many many occurrences of this starting in the 80s. We don't.



Please don't post flamebait or FUD here. The Therac-25 was not programmed in C.

How was this flamebait? It is an example of how bad programming choices/assumptions/guardrails cost lives, a counterargument to the statement 'And yet, it never does'. Splitting hairs over whether the language is C or assembly misses the spirit of the argument, as both those languages share the linguistic footguns that made this horrible situation happen (but hey, it _was_ the 80s and the choice of languages was limited!). Though, even allowing the "well ackuacally" cop-out argument, it is trivial to find examples of C code causing failures due to out-of-bounds memory usage; these bugs are found constantly (and reported here, on HN!). Now, you would need to argue "well _none_ of those programs are used in life-saving tech" or "well _none_ of those failures would, could, or did cause injury", to which I call shenanigans. The link drop was meant to do just that.

HN is not for flamebait.

Stop spreading FUD.

But oddly enough, Zig is not a memory-safe language, and yet it is still heavily pushed on here. There are a number of measures, comparatively, that can be taken to make C safer too. The story on what can be done with C is still evolving, as Fil-C and other related projects show.

For that matter, there are a number of compiled memory-safe and safer languages: Dlang, Vlang, Golang, etc... who could be discussed and are equally viable choices. And if we are talking about something that needs to be outright safety-critical, Ada and SPARK should definitely be in the debate.

However, all of that doesn't override the right of projects to decide on what language they believe is best for them or what strategies concerning safety that they wish to pursue.


Pushed != interested in/talked about. People really like to mash together a bunch of random individuals into a single actor/agenda.

Golang is not playing in the same niche as C/C++/Rust/Zig, but we have had countless memory safe languages that are indeed a good fit for many uses where C was previously used.


Depends on the point of view; for Reversec, it does.

https://reversec.com/usb-armory

> In addition to native support for standard operating environments, such as Linux distributions, the USB armory is directly supported by TamaGo, an Reversec Foundry developed framework that provides execution of unencumbered Go applications on bare metal ARM® System-on-Chip (SoC) processors.


> Golang, etc... who could be discussed and are equally viable choices.

Golang is not a 100% zero-cost, close-to-the-metal abstraction. You could add Java and .NET too, but they are obviously not replacements for C.


> The US government recently called on everyone to stop using them and move to memory-safe languages.

The US government also _really_ (no sarcasm) cares about safety-critical code that can be formally verified, depending on program requirements. DO-178, LOR1, et al.

Developing those toolchains costs tens of millions, getting them certified costs tens of millions, and then buying those products to use costs 500k-1.5m a pop.

Those toolchains do not exist for rust. I am only aware of these toolchains existing for C, and old C at that.

Hell, rust doesn't even have a formal spec, which is a bit of a roadblock.


The DOD also made the Waterfall method THE standard software development process.

> The DOD also made the Waterfall method THE standard software development process.

I'm sure they also made a few bad decisions too :-P


You mean, "DOW"

Department of Waterfall?

Department of War.

That is not the official name, and it is highly unlikely that it ever will be in the future.

It's worth pointing out the Department of Defense was named the Department of War for over 150 years, up until 1947.

https://en.wikipedia.org/wiki/United_States_Department_of_Wa...


True, but it required congressional approval to change the name then, and it would now as well.

This congress is not likely to approve it. And the next congress, even less so.

That said, "ever" is probably too strong. There's a window wherein the chaos which is currently being actively created by the US will develop to an extent that compels the US (or is sold to US voters as a necessary step) to adopt a foreign policy where it would be the more appropriate title. And if the adults can't manage that with charismatic leadership in the next election cycle or two, we could be right back here again, with quasi-legitimate geopolitical justification for the sort of big-stick wagging we see today.

I honestly think this is the goal, and I'm not sure the American people are up to the challenge of preventing it.


In the UK, War Office --> Ministry of Defence, in the 60s I think.

No. I don't.


> While Rust isn’t “certified” out of the box, it provides attributes that facilitate certification. By design, Rust restricts certain low-level operations and enforces strict memory safety rules, effectively shifting much of the error-checking and verification into compile-time. This means that issues that might otherwise be found by multiple external tools in C/C++ are caught early during the Rust build process.

I think your link agrees with me, actually.


https://ferrocene.dev/

DO-178C isn’t there yet, but I believe I heard that it’s coming. In general, Ferrous Systems works with customer demand, which has been more automotive to start.


I believe it may come; that would be really neat.

To actually have it happen, someone is going to be out 10-30 million bucks. And again for each new compiler version.


Qualifying Ferrocene was way, way, way less expensive than that, and they've already had multiple versions of Rust qualified. The incremental qualifications are even easier and cheaper than the initial one is.

26262 is a lot less expensive than DO-178.

I'd believe it, but from talking about this with the Ferrocene folks, there's just structural issues why it was much easier to qualify rustc than it has been to qualify C compilers. This is how they're able to offer the product at a significantly lower price point, and how they've been able to fairly regularly re-qualify new versions quickly.

It is certainly non-trivial.


> With developments such as the Ferrocene-qualified compiler, Rust can now meet all the analysis requirements under DO-178C, one of the most stringent safety-critical standards worldwide.

“Can meet” vs “has met” is the entire difference.

Clearly C “can meet” and “has met” DO-178. So, I posit that more languages than C “can meet” this standard.

Proving it is the very hard, very expensive part.

Oh, and whatever version of the Rust compiler gets certified will be locked down as the only certified toolchain. No more compiler updates every 6 weeks. Unless you go through the whole process again.


Ferrocene has qualified Rust 1.68.2, 1.76.0, 1.79.0, 1.81.0, 1.83.0, 1.86.0, 1.87.0, 1.89.0, with 1.91.0 in the upcoming release.

It's not every six weeks, but it's far faster than once every three years.


Now imagine if every CVE was actually fixed.

How would the three letter agencies then spy on people and deliver their payloads on various target devices?

The governments around the world really need the security holes to exist despite what they say.


Every serious project should be able to justify its choice of tools. "Because I feel like it" is fine for hobby projects or open-source passion projects, but production-worthy software should have reasoning behind its choices. That reasoning can be something like "it's most effective for us because we know it better" but it should be a deliberate, measured choice.

SQLite is an example of an extremely high quality software project. That encompasses not only the quality of the software itself, but the project management around it. That includes explaining various design choices, and this document explaining the choice of language is just one of many such explanations.


Indeed, indeed.

I mostly agree, but OOP has most definitely not been in vogue on HN for the past decade at least, arguably far longer than that (think Rust pre-1.0, Go 1.0 times).

Absolutely. Our thought leaders have been pushing functional programming for a long time now.

Is it possible to have an OOP language which is also functional? Or is it impossible without imperative paradigms?

There's some muddiness in the terminology here -- OOP is really a design style, and "OOP languages" are generally imperative languages that have semantics that encourage OOP design. It is very possible, even encouraged, to represent state as "Objects" in many functional languages; it's just not really enforced by the language itself.

A good example of this is the object systems in Scheme and Common Lisp (which are less strictly Functional (note the capital F in that word) than something like Haskell).


I asked mainly because of the terminology. I read a primer on how to code OOP in plain C about a decade ago, so I knew that the paradigm can definitely be applied to "non-OOP languages", but I wasn't sure whether the term "functional programming" allows this or not, for some obscure academic reasons. Based on how I coded when I encountered Haskell the first time, I would say it's definitely possible, but I think there are some features in Haskell which can be used to break pure functional programming, and if those are not considered FP, then who knows. But I used Haskell the last time a few years ago, so my memory is definitely not clear.

Gilad Bracha talks about how they're not mutually exclusive concepts, and I mostly agree (OOP can have tail-call recursion and first-class functions, for example). But the philosophy seems very different: functional programming is "standing above" the data, where you have visibility at all times and do transformations on the data. OOP is much more about encapsulation, where you "send a message" to an object, and it does its own thing. So you could totally write OOP code where you provide some core data structures that you run operations on, but in practice encapsulation encourages hiding internal data.

Though on further thought, maybe this isn't FP vs OOP, because C has a similar approach of "standing above", and C is the hallmark imperative language.
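A rough sketch of the two stances, with made-up names (Rust used here only because it can express both styles): the first function "stands above" visible data and transforms it, while the second hides the data inside an object that you ask to do something.

    // "Standing above" the data: the caller sees the numbers and transforms them.
    fn total_with_markup(prices: &[f64]) -> f64 {
        prices.iter().map(|p| p * 1.2).sum()
    }

    // "Sending a message": the data is encapsulated, and the caller can only
    // ask the object to operate on its own internal state.
    struct Cart {
        prices: Vec<f64>,
    }

    impl Cart {
        fn total_with_markup(&self) -> f64 {
            self.prices.iter().map(|p| p * 1.2).sum()
        }
    }

    fn main() {
        let prices = vec![10.0, 20.0];
        println!("{}", total_with_markup(&prices));

        let cart = Cart { prices };
        println!("{}", cart.total_with_markup());
    }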


Smalltalk, the original OOP language, is "both", at least if you're not one of those people who thinks FP is impossible if it's not ML or Haskell.

Scala has been that for decades. They are not opposing paradigms. (In fact, mutability has a few edge cases that don't play nicely with OOP to begin with.)

I mean Scala kind of does both (and then some). I'm not sure I would call it an OOP language, but you can sure write the same gross Java enterprise bloatware in Scala too if you want.

> Every project and programmer shouldn't feel they have to justify their choice not to use Rust (or Zig)

You won't easily find Zig programmers who want you to use Zig at all costs, or who believe it's a moral imperative that you do. It's just antithetical to the whole concept of Zig.

The worst that can happen is that Zig programmers want C projects to have a build.zig so they can cross-compile the project trivially, since that's usually not a thing C/C++ build scripts tend to offer. And even then, we have https://github.com/allyourcodebase/ so that Zig users can get their build.zig scripts without annoying project maintainers.


Not always, but sometimes, new things are just better.

One example is null-- a billion dollar mistake as Tony Hoare called it. A Maybe type with exhaustive pattern matching is so dramatically better, it can be worth switching just for that feature alone.
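In Rust terms that's Option<T> plus match; a tiny sketch (with a hypothetical lookup function) of why it beats null: the compiler forces the absent case to be handled, so there's no runtime null dereference to forget about.

    // Hypothetical lookup: Option<T> replaces a nullable return value.
    fn find_user(id: u32) -> Option<&'static str> {
        if id == 1 { Some("alice") } else { None }
    }

    fn main() {
        // Exhaustive pattern matching: omitting the None arm is a compile error,
        // not a null dereference waiting to happen at runtime.
        match find_user(2) {
            Some(name) => println!("found {}", name),
            None => println!("no such user"),
        }
    }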


This "new" thing could have grandkids now :D

ML is not some new development; it just took this long for some of its ideas to go mainstream.


I get asked this all the time regarding TidesDB. Why didn’t you choose Rust? Well.

Yeah, this is super common. Great comment.


We (especially management) are trained to always want shiny new thing. Also it's an easy line of dialogue.

Funny how managers get blamed for both wanting new things and for not wanting new things. In the Java 6 days every dev wanted to upgrade to 7 and later 8… but the meme was that their manager wouldn’t ever let them.

> Every project and programmer shouldn't feel they have to justify their choice not to use Rust

Maybe writing about it was taken as an opportunity to clarify their own thinking about the topic?


OOP is pretty much a has-been.

Value semantics is the hot thing now I'd say.


That is why all major operating system GUIs don't use it, yep.

> a bit less in recent years, to use OOP.

That’s an understatement.


Rust programmers have this "holier than thou" attitude that is so toxic. It's essentially wokeism for programming. No wonder it originates from San Francisco, of all places.

The language itself features interesting ideas, many of them borrowed (pun intended) from Haskell, so not that new after all. But the community behavior proved consistently abysmal. A real put off.


Yeah, results are what matters. SQLite's process seems to produce solid bug-free results.

My only complaint would be that there's many SQL features I want to use which aren't supported. Surely some part of that is deliberately restricted scope, but some might also be dev velocity issues.

DuckDB (C++) can be used like an SQLite with more SQL features, but it's also a lot buggier. Segfaults from your database are not what you want.

So still hoping for some alternative to come along (or maybe I'll need to write it myself)


What isn't supported? After window functions were added, what else is missing?

The ones that come to mind immediately are more ALTER statements (especially adding non-virtual columns), DATE, TIME, and TIMESTAMP types and their associated functions, and ARRAY types. Although I don't wish to disparage SQLite, they do support a lot of features. Just that I constantly run into the ones they don't, with my use cases.

And beyond standard SQL, stuff like hashmap/bloom filter/vector indices, nearest neighbor search, etc...


>justify their choice not to use Rust (or Zig)

It's disingenuous to lump them together. It is the former that does the whole toxic, pushy advocacy routine.


OOP was always debatable, but with the rise of AI agents I'd go so far as to say it's objectively bad now. AI is much better at reasoning about pure function grammars than imperative classes with inheritance; abstractions that aren't airtight are bug factories.

It definitely is possible, because languages like Vlang (for example) are able to use '.' instead of '::'. I always saw this as language creator preference, rather than any inescapable technical reason.

Appears to be a tactic to cause confusion with releases coming from other programming languages, because otherwise this doesn't make much sense for an actual user to do.

For example, V (Vlang) went to 0.5.0 (December 31st) and C3 is 0.7.8 (December 6th) in 2025 (last month).

[1] https://github.com/vlang/v/releases/tag/0.5

[2] https://github.com/c3lang/c3c/releases/tag/v0.7.8


The problem is, if you combine ability with possibility, then it is bound to happen.


Mojo, Carbon, Bosque, Val, Gleam, Fika, and Vlang.


Think it's more along the lines of Jon having the ability to create a language and, upon being dissatisfied with what he was using, deciding to make his own.

GitHub is littered with pet languages that people have made, and I doubt their reasons are simply about being "eye-wateringly arrogant".

Moving past that, people paying attention or wanting to use the language, usually means it appeals to them. Jai has fans and supporters, because they are able to look past or are not concerned about his personality quirks, but are focused on the quality and usefulness of the software produced.


Wasn't Borgo[1] supposed to be the new child of Rust and Go too? Rue is like the new replacement for Borgo, which hasn't been out for long.

On top of that, the original child of Rust and Go, was called Vlang[2].

[1]: https://borgo-lang.github.io/

[2]: https://www.youtube.com/watch?v=puy77WfM1Tg (Is V Lang Better Than Go And Rust? Let's Find Out)


I don’t believe I’m familiar with Borgo. If I did see it before, I’d forgotten about it.

I am familiar with V.


This "negative reputation" on here, looks to be something that was first artificially generated by competitors and then allowed to boil, along with those using the bubbling to promote themselves and their sites.

