Hacker News | nicce's comments

For me it is like 0.5s. Interesting.

`extern "C"` still uses Rust. You want to skip Rust and call C from other languages directly.

Rust doesn't have a runtime, so it looks just like C in compiled form. cbindgen even spits out a C header. I'm not sure what skipping Rust practically means, even if you can argue there's a philosophical skip happening.

You can't apply all of the hacks C programmers apply, like calling private methods, because Rust's internal ABI is different in some annoying spots.

Of course you shouldn't do that, but it's a problem rust-to-c conversion would solve.

Another reason I could think of is the desire to take something licensed in a way you don't like, written in Rust, for which you'd like to call into the private API in your production code, but don't want the legal obligations that come with modifying the source code to expose the methods the normal way.

I don't think either use case is worth the trouble, but there are theoretically some use cases where this makes sense.

It's also something I might expect someone who doesn't know much about Rust or FFIs outside of their own language might do. Not every language supports exporting methods to the C FFI, and if you're coming from one of those and looking to integrate Rust into your C you might think that translation is the only way to do it.

Most likely, it's a way rust haters can use rust code without feeling like the "other side" has won.


Not GP, but what is the point of touching Rust at all then?

I'd like to write Rust, receive its safety benefits (esp. the borrow checker), compile it to equivalent C, and then use C's tooling on the result. Why use C's tooling?

In verification, C has piles of static analyzers, dynamic analyzers, test generators (eg KLEE), code generators (eg for parsing), a prover (Frama-C), and a certified compiler. If using a subset of these, C code can be made more secure than Rust code with more effort.

There are also many tools for debugging and maintenance made for C. I can also obfuscate by swapping out processor ISAs, because C supports all of them. On the business end, those processors may be cheaper with lower watts.

I also have more skilled people I can hire or contract to do any of the above. One source estimated 7 million C/C++ developers worldwide. There's also a ton of books, online articles, and example code for anything we need. Rust is very strong in that last area for a new language but C/C++ will maintain an advantage, esp for low-level programming.

These are the reasons I'd use Rust if I wanted C or C++ for deployment. Likewise, I wish there was still a top-notch C++ to C compiler to get the same benefits I described with C's tooling.


Rust is much easier to learn due to C/C++ books all being paid (even cmake wants you to buy their book) whereas Rust documentation is free. I bet more and more people are choosing to learn Rust over C/C++ for this reason, and the number of C/C++ devs will be decreasing.

what a weird take to me... C has DECADES of high quality teaching material in form of books, university courses, plenty of which is freely available with a bit of searching.

And, if we discount the fact that "buying" a book is such a big hurdle, even more high quality academic textbooks and papers to boot; anything from embedded on the weirdest platforms, basic parsing, writing compilers, language design, high performance computing, teaching of algorithms, data structures, distributed systems, whatever!

edit: I even forgot to name operating system design plus game programming ; and of course accompanying libraries, compilers & build systems to cover all of those areas and use cases! edit2: networking, signal processing, automotive, learning about industry protocols and devices of any sort... if you explore computer science using C as your main language you are in the biggest candy store in the world with regards to what it is you want to learn about or do or implement...


> C has DECADES of high quality teaching material in form of books, university courses, plenty of which is freely available with a bit of searching.

Which means all that high quality teaching material is DECADES old. Rust development is centralised and therefore the docs are always up-to-date, unlike C/C++ which is a big big mess.


Being decades old does not make it out of date. Until a few years ago, the Linux kernel was written using C89. While it has switched to C11, the changes are fairly small such that a book on C89 is still useful. Many projects still write code against older C versions, and the C compiler supports specifying older C versions.

This is very different than Rust where every new version is abandonware after 6 weeks and the compiler does not let you specify that your code is from a specific version.


> This is very different than Rust where every new version is abandonware after 6 weeks and the compiler does not let you specify that your code is from a specific version.

Do you have any specific evidence? The Rust ecosystem is known for libraries that sit on crates.io for years with no updates but are still perfectly usable (backward-compatible) and popular. Projects usually specify their MSRV (minimum supported Rust version) in the README.



Are you asking for LTS releases? https://ferrocene.dev/en/

I was not asking for that. I was answering your question. You asked for evidence of rust releases being abandonware. I gave it to you. Someone else trying to ameliorate Rust releases does not change this reality.

Well “abandonware” is a strange way to call that because nothing is actually abandoned.

Use language features not considered “stable rust” that are later discarded and you will learn it is abandonware very quickly. In any case, you asked for proof and now have it. You should be saying thank you instead of trying to argue.

I mean, thank you, but calling Rust abandonware just because it uses a rolling release model is misleading IMO. Also there's nothing wrong with unstable features being discarded, they're unstable.

The description is accurate compared to what you get from other more established systems languages.

> Being decades old does not make it out of date.

Right, the docs never get out of date if the thing they document never changes. Can you say the same about C++ though? I’ve heard they release new versions every now and then. My robotics teacher didn’t know ‘auto’ is a thing for example.


Both C and C++ release new versions. The compilers continue to support the old versions and people continue using the old versions (less so in the case of C++). Rust’s compiler drops the old version every time it has a new release.

There is no `-std=1.85` in rust 1.86. You do get `-std=c++98` in both g++ and clang++. A book on C or C++ is still useful even decades later since the version of C or C++ described does not become abandonware at some well defined point after release, unlike Rust releases.


I'm confused. Rust uses semantic versioning:

Given a version number MAJOR.MINOR.PATCH, increment the:

    MAJOR version when you make incompatible API changes
    MINOR version when you add functionality in a backward compatible manner
    PATCH version when you make backward compatible bug fixes
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

What kind of versioning scheme does C/C++ use?


C and C++ are two different languages. They are versioned by years. Rust technically does not have versions. The rust tools have versions. Basically all versions of C are binary compatible with each other. I suggest you actually learn and use C rather than asking questions since you are never going to ask the right things to understand how it works without having firsthand experience.

> C and C++ are two different languages. They are versioned by years.

That sounds like Rust editions.


Only superficially. You cannot specify a past version of Rust where features existed that have since been removed by doing that. You also do not have a situation where two different incompatible languages are being accepted by the same compiler and as long as you specify which one is used, the compiler will properly compile code for it. For example, C will not accept headers with C++ exclusive features like namespaces and C++ will not accept headers with C exclusive features like variably modified types.

The only reason you see people grouping the two languages together is due to history. They are both derivatives of an ancient prestandard form of C called K&R C. They both have different standards committees who had different ideas about how to move forward from K&R C. The result is that C compilers were extended to support both, and that extension continues to this day despite the divergence between the two languages. The C standards committee accepted some of the changes the C++ committee made to K&R C, although the C++ standards committee does not reciprocate by accepting changes made by the C standards committee. This is making them increasingly different languages.

Try to spend time learning how other things work instead of posting replies that attempt to reinterpret everything people tell you through a Rust lens whenever someone is kind enough to answer your questions like you are doing here. It is like asking people how Chinese works and then telling them “English does it this way”. The “nothing matters but <insert language here>” mentality that causes that is going to annoy a large number of people from whom you would otherwise be able to learn.


`auto` as it is now has been in C++ since C++11; that's more than a decade ago...

If your argument was about C, then sure, that's a C23 feature (well, the type-inference kind of auto) and is reasonably new.

This is much more a reflection on your professor than on the language. C++11 was a fundamental change to the language; anyone teaching or using C++ in 2025 should have an understanding of how to program well in a 14-year-old version of said language...


> Auto as it is now has been in C++ since C++11, thats more than a decade ago...

> anyone teaching or using C++ in 2025 should have an understanding of how to to program well in a 14 year old version of said language...

If the current year is 2025 then 14 years ago is 2011 which is not that long ago.

> If your argument was C then sure thats a C23 feature (well the type inference type of auto ) and is reasonably new.

Grandparent comment is arguing that Linux was written in C89 until a few years ago, so decades-old books on C aren't actually outdated.


Decades-old books on C most certainly are still useful, even in modern C++23, because you need to interact with other libraries written in C89.

When a lot of modern CS concepts were first discovered and studied in the 70s, there's no point arguing that old books are useless. Honestly, there may be sections of old books that are useless, but on the whole they are still useful.


We're talking about learning C/C++ from scratch which makes no sense to do by using a decades old book because it wouldn't teach you any modern features. Also we're not talking about computer science.

You do not need to know about modern features to write code in C. This is part of computer science.

> You do not need to know about modern features to write code in C.

Then what’s the point of adding any new features?


Some people want to use them, they are useful in some contexts and they often already exist in some form elsewhere, but the majority of people often do not need them.

That said, when you learn a foreign language, you do not learn every word in the dictionary and every grammatical structure. The same is true for programming. You just don't need to have a more up to date book than one on C89 to begin learning C.


A long time ago, Victor Yodaiken told me the best way to learn C was the old, K&R book. The explanations, examples, and even pitfalls were all explained well. My code worked without much fuss. That's probably because C hasn't changed much.

I ended up switching to Python for job requirements after getting through half the book. If I re-learn C, I'll go back to it. If it were C++, that would be a totally different story, since they kept piling features on over time.


Oh my... Are you serious? I'm almost triggered by this. A book about algorithms or data structures from 20 years ago has nothing more to teach? 3D game engine design from 20 years ago has nothing more to teach? No point in looking at the source code of Quake, reading k&r, and Knuth's TAOCP Volume 1 was published in 1968 so it's obviously irrelevant garbage!

I could launch into an essay on the kind of lack of understanding you just displayed about the world, but I won't.... I won't...


We’re talking about C/C++, not algorithms or data structures.

How you implement algorithms and data structures in C++/Rust is semantics at best. The imperative shells of those languages are semantically identical, right down to the memory model.

Right, that's why a 20 year old book on algorithms and data structures is not necessarily outdated, but a 20 year old book on C/C++ most certainly is.

My copy of The C++ Programming Language for C++98 is still useful today, as is my copy of The C Programming Language for C89. The idea that these books are no longer useful is absurd. Modern compilers still support those versions, and the new versions are largely extensions of the old (although C++11 changed some standard library definitions). The only way you could think this is if you have zero knowledge of these languages.

> The only way you could think this is if you have zero knowledge of these languages.

Exactly. For context see my original comment above about C/C++ books being paid.


Are you unable to use search engines:

https://www.learn-c.org/

There are so many free learning resources for these languages that it is ridiculous to say that you need books to learn them. The books are great, but non-essential. If you insist on reading books, there is an ancient invention called a library that you can use for free.


What C standard does that website describe?

At a glance, the code is compatible with all C standards ever published. You are too fixated on learning the latest. The latest is largely just a superset of the earlier standards and the new features are often obscure things you are unlikely to need or want to use.

The main exceptions that you actually would want to use would be the ability to declare variables anywhere in a block and use single line // comments from C++ in C99. Most C programmers do not even know about most of the things added in newer C standards beyond those and perhaps a handful of others.


I did more research and found https://isocpp.org/get-started which appears to be the authority for C++. It states that I will need a textbook for learning C++ and includes a link to Bjarne Stroustrup's "Tour of C++" Amazon page (not buying it lol). For C the situation is more complicated because there appear to be multiple organizations making standards for it, and you have to pay "CHF 221.00" to even see the standard. It kind of reminds me of the web, where there are also multiple consortia making standards and browsers hopefully implement them (except the web standards are free). In conclusion I much prefer Rust, where you can just read the docs without bullshit.

Almost nobody using C (or C++ for that matter) has read the standard. The standard exists for compiler developers. If you want to read the standard, you can get copies of the drafts for free. It is an open secret that the final draft is basically verbatim with the standard that costs money. However, it is very rare that a C programmer needs to read the standard.

As for C++, there is nothing at that link that says you need a textbook to learn C++ (and the idea that you need one is ludicrous). The textbooks are suggested resources. There are plenty of free resources available online that are just as good for learning C++.

You would be better off learning C before learning C++. C++ is a huge language and its common history with C means that if you do not understand C, you are likely going to be lost with C++. If you insist on learning C++ first, here is the first search result from DuckDuckGo when I search for "learn C++":

https://www.learncpp.com/

You will likely find many more.

For what it is worth, when I was young and wanted to learn C++, I had someone else tell me to learn C first. I had not intended to follow his advice, but I decided to learn C++ by taking a university class on the subject and the CS department had wisely made learning C a prerequisite for learning C++. I later learned that they had been right to do that.

After learning C++, I went through a phase where I thought C++ was the best thing ever (much like how you treat Rust). I have since changed my mind. C is far better than C++ (less is more). I am immensely proud of the C code that I have written during my life while I profoundly regret most of the C++ code that I have written. A particular startup company that I helped get off the ground after college runs their infrastructure on top of a daemon that I wrote in C++. Development of that daemon had been a disaster, with C++ features making it much harder to develop than it actually needed to be. This had been compounded by my "use all of the features" mentality, when in reality, what the software needed was a subset of features and using more language features just because I could was a mistake.

I had only been with the startup for a short time, but rejoined them as a consultant a few years ago. When I did, I saw that some fairly fundamental bugs in how operating system features were used from early development had gone unresolved for years. So much of development had been spent fighting the compiler to use various exotic language features correctly that actual bugs that were not the language's fault had gone unnoticed.

My successor had noticed that there were bugs when things had gone wrong, but put band-aids in place instead of properly fixing the bugs. For example, he used a cron job to restart the daemon at midnight instead of adding a missing `freeaddrinfo()` call and calling `accept()` until EAGAIN is received before blocking in `sigwaitinfo()`. Apparently, ~3000 lines of C++ code, using nearly every feature my younger self had known C++ to have, were too complicated for others to debug.

One of the first things I did when I returned was write a few dozen patches fixing the issues (both real ones and cosmetic ones like compiler warnings). As far as we know, the daemon is now bug free. However, I deeply regret not writing it in C in the first place. Had I written it in C, I would have spent less time fighting with the language and more time identifying mistakes I made in how to do UNIX programming. Others would also have been more likely to understand it in order to do proper fixes for bugs that my younger self had missed.


> As for C++, there is nothing at that link that says you need a textbook to learn C++.

Sorry, it says that in their FAQ[0]. It also says "Should I learn C before I learn C++?" "Don’t bother." and proceeds to advertise a Stroustrup book[1].

[0]: https://isocpp.org/wiki/faq/how-to-learn-cpp#start-learning

[1]: https://isocpp.org/wiki/faq/how-to-learn-cpp#learning-c-not-...

> If you insist on learning C++ first, here is the first search result from DuckDuckGo when I search for "learn C++":

I don't insist on learning C++ and I even agree with you that C is better. But I have a problem with learning from non-authoritative sources, especially random websites and YouTube tutorials. I like to learn from official documentation. For C there appears to be no official documentation, and my intuition tells me that, as nickpsecurity mentioned, the best way is to read the K&R book. But that brings us back to my original point that you have to buy a book.

> was the one true way (like you seem to have been with Rust)

I don't think there exists any one true way. It depends on what you do. For example I like Rust but I never really use it. I pretty much only use TypeScript.

> was the best thing ever (much like how you treat Rust)

I would actually prefer Zig over Rust but the former lacks a mature ecosystem.

> For example, they used a cron job to restart the daemon at midnight instead of adding a missing `freeaddrinfo()` call and calling `accept()` until EAGAIN is received before blocking in `sigwaitinfo()`.

This sounds like a kind of bug that would never happen in Rust because a library would handle that for you. You should be able to just use a networking library in C as well but for some reason C/C++ developers like to go as far as even implementing HTTP themselves.

> After learning C++...

Thanks for sharing your story. It's wholesome and I enjoyed reading.


> Sorry, it says that in their FAQ[0]. It also says "Should I learn C before I learn C++?" "Don’t bother." and proceeds to advertise a Stroustrup book[1].

They also would say "Don't bother" about using any other language. If you listen to them, you would never touch Rust or anything else.

> But I have a problem with learning from non-authoritative sources, especially random websites and YouTube tutorials. I like to learn from official documentation. For C there appears to be no official documentation, and my intution tells me that, as nickpsecurity mentioned, the best way is to read the K&R book. But that brings us back to my original point that you have to buy a book.

The K&R book is a great resource, although I learned C by taking a class where the textbook was "A Book On C". I later read the K&R book, although I found "A Book on C" to be quite good. My vague recollection (without pulling out my copies to review them) is that A Book On C was more instructional while the K&R book was more of a technical reference. If you do a search for "The C programming language", you might find a PDF of it on a famous archival website. Note that the K&R book refers to "The C programming language" by Kernighan and Ritchie.

Relying on "authoritative" sources by only learning from the language authors is limiting, since they are not going to tell you the problems that the language has that everyone else who has used the language has encountered. It is better to learn programming languages from the community, who will give you a range of opinions and avoid presenting a distorted view of things.

There are different kinds of authoritative sources. The language authors are one, compiler authors are another (although this group does not teach), engineers who actually have used the language to develop production software (such as myself) would be a third and educational institutions would be a fourth. If you are curious about my background, I am the ryao listed here:

https://github.com/openzfs/zfs/graphs/contributors

You could go to edx.org and audit courses from world renowned institutions for free. I will do you a favor by looking through what they have and making some recommendations. For C, there really is only 1 option on edX, which is from Dartmouth. Dartmouth is a world renowned university, so it should be an excellent teacher as far as learning C is concerned. They appear to have broken a two semester sequence into 7 online courses (lucky you, I only got 1 semester at my university; there was another class on advanced UNIX programming in C, but they did not offer it the entire time I was in college). Here is what you want to take to learn C:

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/linux/dartmouth-college-linux-basi...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

There is technically a certificate you can get for completing all of this if you pay, but if you just want to learn without getting anything to show for it, you can audit the courses for free.

As for C++, there are two main options on edX. One is IBM and the other is Codio. IBM is a well known titan of industry, although I had no idea that they had an education arm. On the other hand, I have never heard of Codio. Here is the IBM sequence (note that the ++ part of C++ is omitted from the URLs):

https://www.edx.org/learn/c-programming/ibm-fundamentals-of-...

https://www.edx.org/learn/object-oriented-programming/ibm-ob...

https://www.edx.org/learn/data-structures/ibm-data-structure...

There actually are two more options on edX for C++, which are courses by Peking University and ProjectUniversity. Peking University is a world class university in China, but they only offer 1 course on edx that is 4 weeks long, so I doubt you would learn very much from it. On the other hand, I have never heard of ProjectUniversity, and their sole course on C++ is only 8 weeks long, which is not very long either. The 3 IBM courses together are 5 months long, which is what you really want.

> I pretty much only use TypeScript.

Learn C, POSIX shell scripting (or bash) and 1 functional programming language (Haskell is a popular choice). You will probably never use the functional programming language, but knowing about functional programming concepts will make you a better programmer.

> This sounds like a kind of bug that would never happen in Rust because a library would handle that for you. You should be able to just use a networking library in C as well but for some reason C/C++ developers like to go as far as even implementing HTTP themselves.

First, I was using a networking library. The C standard library on POSIX platforms is a networking library thanks to its inclusion of the Berkeley sockets API. Second, mistakes are easy to criticize in hindsight with "just use a library", but in reality, even if you use a library, you could still make a mistake, just as I did here. This code also did much more than what my description of the bug suggested. The reason for using asynchronous I/O is to be able to respond to events other than just network I/O, such as SIGUSR1. Had I not been doing that, it would not have had that bug, but it needed to respond to other things than just I/O on a socket.

I described the general idea to Grok and it produced a beautiful implementation of this in Rust using the tokio "crate". The result had the same bug that the C++ code had, because it made the understandable assumption my younger self made that 1 SIGIO = 1 connection, but that is wrong. If two connection attempts are made simultaneously from the perspective of the software, you only get 1 SIGIO. Thus, you need to call accept() repeatedly to drain the backlog before returning to listening for signals.

This logical error is not something even a wrapper library would prevent, although a wrapper library might have prevented the memory leak, but what library would I have used? Any library that wraps this would be a very thin wrapper and the use of an additional dependency that might not be usable on then future systems is a problem in itself. Qt has had two major version changes since I wrote this code. If I had used Qt 4's network library, this could have had to be rewritten twice in order to continue running on future systems. This code has been deployed on multiple systems since 2011 and it has never once needed a rewrite to work on a new system.

Finally, it is far more natural for C developers and C++ developers to use a binary format over network sockets (like I did) than HTTP. Libcurl is available when people need to use HTTP (and a wide variety of other protocols). Interestingly, an early version of my code had used libcurl for sending emails, but it was removed by my successor in favor of telling a PHP script to send the emails over a network socket (using a binary format).


> Thus, you need to call accept() repeatedly to drain the backlog before returning to listening for signals.

It's not just accept. If your socket is non-blocking the same applies to read, write, and everything else. You keep syscalling until it returns EAGAIN.

> I described the general idea to Grok and it produced a beautiful implementation of this in Rust using the tokio "crate". The result had the same bug that the C++ code had, because it made the understandable assumption my younger self made that 1 SIGIO = 1 connection, but that is wrong.

I don't know what your general idea was but tokio uses epoll under the hood (correctly), so what you are describing could only have happened if you specifically instructed grok to use SIGIO.

> Finally, it is far more natural for C developers and C++ developers to use a binary format over network sockets (like I did) than HTTP.

Designing a custom protocol is way more work than just using HTTP. <insert reasons why http + json is so popular (everyone is familiar with it blah blah blah)>.


> It's not just accept. If your socket is non-blocking the same applies to read, write, and everything else. You keep syscalling until it returns EAGAIN.

You do not call read/write on a socket that is listening for connections.

> I don't know what your general idea was but tokio uses epoll under the hood (correctly), so what you are describing could only have happened if you specifically instructed grok to use SIGIO.

That is correct. There is no other way to handle SIGUSR1 in a sane way if you are not using SIGIO. At least, there was no other way until signalfd was invented, but that is not cross platform. epoll isn't either.

> Designing a custom protocol is way more work than just using HTTP. <insert reasons why http + json is so popular (everyone is familiar with it blah blah blah)>.

You are wrong about that. The code is just sending packed structures back and forth. HTTP would overcomplicate this, since you would need to implement code to go from binary to ASCII and ASCII to binary on both ends, while just sending the packed structures avoids that entirely. The only special handling this needs is to have functions that translate the structures from host byte order into network byte order and back, to ensure that endianness is not an issue should there ever be an opposite endian machine at one end of the connection, but those were trivial to write.

Do yourself a favor and stop responding. You have no idea what you are saying and it is immensely evident to anyone who has a clue about software engineering.


> You could go to edx.org and audit courses...

This is great advice, thanks!


> edit2: networking, signal processing, automotive, learning about industry protocols and devices of any sort...

I admit there are many great products written in C that aren't going anywhere any time soon, notably SQLite, but there is no reason to write new software in C or C++.


I wrote a new daemon in C the other day. Plenty of people will continue to write software in C. Some will even write it in C++. There is nothing you can write that will convince the world to stop writing software in these languages. We went through this once with Java. We are going through it with Rust. We will go through it with Zig or whatever it is that people say everyone should adopt next instead of Rust and all else that came before it.

There are tools that can prove the absence of runtime errors in industrially useful subsets of C. Frama-C, RV-Match, and Meta's Infer come to mind. That code is then usable in about anything, because so much of it is written in or can call C. Until Rust has that combo, there's still a reason to use C.

Personally, I'd use both Rust and C with equivalent code. The Rust types and borrow checker give some extra assurance that C might not have. The C tooling gives extra assurance Rust doesn't have. Win, win. If I want, I can also do a certified compile or cross-compile of the Rust-equivalent, C code.


Astree claims to be able to prove the absence of runtime errors in both C and C++, without requiring the use of subsets:

https://www.absint.com/astree/index.htm

By the way, C has a formally verified C compiler:

https://compcert.org/compcert-C.html


Yeah, CompCert will certify a binary for those willing to buy it or use a GPL license. Empirical testing showed it had no bugs in its middle end, unlike other compilers.

On Astree, I couldn't believe it supported all language constructs. I found this on your link:

"and is subject to the same restrictions as Astrée for C.

The high-level abstraction features and template library of C++ facilitate the design of very complex and dynamic software. Extensive use of these features may violate the established principles of safety-critical embedded software development and lead to unsatis­fac­tory analysis times and results. The Astrée manual gives recommendations on the use of C++ features to ensure that the code can be well analyzed. For less constrained (less critical) C++ code, we recommend using the standalone RuleChecker."

So, it still does require a language subset that reduces complexity to benefit from the full analysis. They have greatly expanded what errors they catch since I first read about them. So, thanks for the link.


I researched and discovered kani https://github.com/model-checking/kani, it's pretty cool.

```rust
#[kani::proof]
fn main() {
    let mut array: [i32; 10] = kani::any();
    array.sort_unstable();
    let index: usize = kani::any_where(|i| *i > 0 && *i < array.len());
    assert!(array[index] >= array[index - 1]);
}
```

I do, and will, the industry does and will, for at least a few more decades. And I even enjoy doing so (with C; C++ is more forced upon me, but that'll be the case for some time to come)

That’s what I’m saying. In a few decades you and most of those alleged 7 million C/C++ developers will retire, and there won’t be anyone to replace you because everyone will be using Rust or Zig or Go.

Very strong statement, one I don't really believe

That’s what happened to COBOL, right?

In COBOL’s case, nobody really wanted to write software in it in the first place. People used assembly or FORTRAN at the time. MBAs wanted a language corporate secretaries could use, so they made COBOL. It failed at its main purpose, but since IBM pushed it for business processing, people were coerced into writing software with it since if they did not use it, they would not have a job. As the industry developed, they stopped finding new people that they could coerce into using it and the old people started retiring/dying. Unlike COBOL adoption, C and C++ adoption occurred because software engineers wanted to use them due to merit.

> anything from embedded on the weirdest platforms, basic parsing, writing compilers, language design, high performance computing, teaching of algorithms, data structures, distributed systems, whatever

All that is language-agnostic and doesn’t necessarily have anything to do with C.


Yes, but there is material covering all of those aspects with implementations in C plus libraries plus ecosystem support! From teaching material to real world reference implementations to look at and modify and learn from.

And C maps so directly to so many concepts; it's easy to pick up any of those topics with C; and it being so loose makes it even perfect to explore many of those areas to begin with, since very quickly you're not fighting the language to let you do things.


People may learn from material that uses C for illustration purposes, but that won’t prompt them to write their own C. And don’t even mention ecosystem support in C/C++ where developers are notorious for reimplementing everything on their own because there is no package manager.

"rust import csv-parser" "you've installed 89 packages"

why is my executable 20 MB, why is it not as performant as a 50 kB C file, and why doesn't it build a year later when I try with 89 new versions of those packages

obligatory xkcd reference: https://imgs.xkcd.com/comics/dependency_2x.png this is what package managers lead to


Conversely, "why does my hand-written CSV parser only support one of the 423 known variants of CSV, and it isn't the one our customer sent us yesterday?"

You kind of have a point behind dependency hell, but the flip side is that one needn't become an expert on universal CSV parsing just to have a prayer of opening them successfully.


You are greatly exaggerating the numbers, and Rust and its ecosystem are known for being stable. Are you saying everyone should write their own csv parser? Also it’s highly likely that an existing CSV library would be optimized, unlike whatever you wrote ad-hoc, so your performance argument doesn’t hold.

I'm saying package managers automate dependency hell, and software is and will be less stable and more bloated as a consequence. And one should know how to write a CSV parser, and yes, even consider writing one if there are obvious custom constraints and it's an important part of one's project.

(and yes, numbers were exaggerated; I picked something trivial like a csv parser pulling in 89 packages for effect; the underlying principle sadly holds true)


Anyone can read a file, split it by newline and split it by comma, but what if values contain newlines or commas? How do you unescape? What about other edge cases? In Rust an existing library would handle all that, and you would also get structure mapping, type coercion, validation etc for free.
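To make those edge cases concrete, here is a minimal sketch of RFC 4180-style field parsing — exactly the part a naive "split on comma" misses. A real project would just use an existing crate (e.g. `csv`); this hand-rolled version is for illustration only:

```rust
// Parse one CSV record, handling quoted fields that contain commas and
// doubled-quote escapes ("" inside a quoted field means a literal ").
// Deliberately simplified: no embedded-newline or error handling.
fn parse_record(line: &str) -> Vec<String> {
    let mut fields = Vec::new();
    let mut field = String::new();
    let mut in_quotes = false;
    let mut chars = line.chars().peekable();
    while let Some(c) = chars.next() {
        match c {
            '"' if in_quotes => {
                // A doubled quote inside a quoted field is an escaped quote.
                if chars.peek() == Some(&'"') {
                    field.push('"');
                    chars.next();
                } else {
                    in_quotes = false;
                }
            }
            '"' => in_quotes = true,
            // Commas only terminate a field when we are outside quotes.
            ',' if !in_quotes => fields.push(std::mem::take(&mut field)),
            _ => field.push(c),
        }
    }
    fields.push(field);
    fields
}

fn main() {
    // A quoted field may contain commas and escaped quotes.
    let fields = parse_record(r#"a,"b,c","say ""hi""""#);
    assert_eq!(fields, vec!["a", "b,c", r#"say "hi""#]);
}
```

And this still ignores embedded newlines, BOMs, alternative delimiters, and encoding issues — which is the point of reaching for a library.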

The number of dependencies does not indicate how bloated your app is. Package managers actually reduce bloat because your dependencies can have shared dependencies. The reason C programs may seem lightweight is because their number of dependencies is low, but each dependency is actually super fat, and they tend to link dynamically.

In the context of Rust it is indeed not about "bloat". The compiler includes only the used bits and nothing more. However, there are other problems, like software supply chain security. The more dependencies you have, the more projects you have to track. Was there a vulnerability that affects me? Is the project unmaintained, and has some bad actor taken it over?

In C this was actually less of a problem, since you had to copy-paste the shared code into your program, and at some level you were manually reviewing it all the time.

Also, in Rust people tend to write very small libraries, and that increases the number of dependencies. However, many still do not follow SemVer et al., and packages tend to be unstable too, on top of the additional security issues. They may be useful for a short time, but in many cases you need to think about the lifetime of your application, which can be up to 10 years.


> However, there are other problems, like the software supply chain security.

It's not a problem with Rust specifically though. It's not unique to Rust.

> Also in Rust people tend to write very small libraries and that increases the number of dependencies. However, many still not follow SemVer et. al and packages tend to be unstable too.

Don't use random unpopular crates maintained by unknown people without reviewing the code.


That applies to your distro package manager too.

Probably not. Many people prefer C/C++ to Rust, which has its own fair share of problems.

Two people is many people. The general trend I see is that Rust is exploding in adoption.

The last I checked various stats (GitHub Language stats, TIOBE, etc.), Rust wasn't even in the top 10. I'm sure its adoption is increasing. However, other languages like Go seem to be doing much better. Neither will replace C++ or C anytime soon.

C/C++ will be replaced incrementally and it’s already happening. Cloudflare recently replaced nginx with their own alternative written in Rust for example.

That's nice, but a couple of Rust rewrites are not proof of a general trend.

I've been working with C for over 30 years, both professionally and as a hobbyist. I have experimented with Rust but not done anything professional with it. My gut feeling is that Rust is too syntactically and conceptually complex to be a practical C replacement. C++ also has language complexity issues; however, it can be adopted piecemeal and applied to most existing C code.


> My gut feel is Rust is too syntactically and conceptually complex to be a practical C replacement.

That would depend on what you use C for. But I sure can imagine people complain that Rust gets in the way of their prototyping while their C code is filled with UB and friends.


> That's nice, but a couple of Rust rewrites are not proof of a general trend.

It’s not just a couple. We’ve seen virtually all JS tooling migrate to Rust, and there are many more examples I can’t remember by name.


It is, but it is still tiny compared to C/C++. And many people also do not like it.

There are two categories of people who don’t like Rust:

1. C/C++ developers who are used to C/C++ and don’t want to learn Rust.

2. Go developers who claim Rust is too difficult and unreadable.

Which one is you?


I think the "don't want to learn" is a very poor argument. I learn stuff every day, but I want to decide myself what I learn to solve my problems, not because Rust folks think everybody has to learn Rust now. I learned a bit of Rust out of curiosity, but not so much that I could do a lot with it. I do not want to learn more, because I think the language has severe flaws and is much less suitable than C for my use cases.

I know a guy who used to love Rust and his cannot stand it. The hype wore off in his case.

Does he prefer fighting libtool, make, cmake, configure, automake, autoconf and autoreconf just to add a couple of libraries to his project? When I tried to use C, I wrote 10 lines of code and spent hours trying to make all that shit work. It makes me laugh when people say Rust is complicated.

It really is not that complicated. You just use -llibrary when linking and it links.

Oh I never realized I was supposed to install -dev packages, I thought I had to compile myself.

Whether you need -dev packages (for headers) depends on your operating system. I run Gentoo. All headers are always on the system. Other distributions ship the headers separately in -dev packages to save space and you need to install them. You likely can install all -dev packages for everything on your system so you do not need to get the individual ones.

Autocorrect seems to have made a typo worse here. It was supposed to be now, not his.

I think we will get the same safety benefits of Rust in a version of C relatively soon.

Borrow checker is not the only feature that makes Rust great though.

Yes, it also has many aspects I do not like about it. Let's not pretend everybody shares your enthusiasm for it.

What aspects do you not like about Rust?

Too much complexity, long build times, monomorphization, lack of stability / no standard, poor portability, supply chain issues, no dynamic linking, no proper standard, not enough different implementations, etc. It is a nice language though, but I do not prefer it over C.

> long build times, monomorphization

Monomorphization is what causes long build times, but it brings better performance than dynamic dispatch.

> lack of stability

There was another comment which also never elaborated on how Rust is not stable.

> supply chain issues

Not a language issue, you choose your dependencies.

> no proper standard, not enough different implementations

Is that a practical problem?

> no dynamic linking

There is.
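On the monomorphization point above, a quick sketch of the trade-off (the function names are made up for illustration): the generic version is monomorphized — a separate copy of machine code per concrete iterator type, which costs compile time and code size but allows inlining — while the trait-object version is a single copy dispatching through a vtable.

```rust
// Monomorphized: one specialized copy per concrete iterator type.
fn sum_static<I: Iterator<Item = i32>>(iter: I) -> i32 {
    iter.sum()
}

// Dynamic dispatch: one copy, calls go through a vtable.
fn sum_dynamic(iter: &mut dyn Iterator<Item = i32>) -> i32 {
    iter.sum()
}

fn main() {
    let v = [1, 2, 3];
    // Same result either way; the difference is compile time vs runtime cost.
    assert_eq!(sum_static(v.iter().copied()), 6);
    assert_eq!(sum_dynamic(&mut v.iter().copied()), 6);
}
```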


> > no dynamic linking

> There is.

Eh, I'm a Rust fan, and I hate the dynamic linking situation too.

I genuinely cannot see how Rust would be able to scale to something usable for all system applications the way it is now. Is every graphical application supposed to duplicate and statically link the entire set of GNOME/GTK or KDE/Qt libraries it needs? The system would become ginormous.

The only shared library support we have now is either using the C ABI, which would make for a horrible way to use Rust dependencies, or by pinning an exact version of the Rust compiler, which makes developing for the system almost impossible.

Hopefully we'll get something with #[export] [1] and extern "crabi" [2], but until then Rust won't be able to replace many things C and C++ are used for.

[1] https://github.com/rust-lang/rfcs/pull/3435

[2] https://github.com/rust-lang/rfcs/pull/3470
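For reference, the C-ABI route mentioned above looks roughly like this (a minimal sketch; in a real build the crate would be compiled as `crate-type = ["cdylib"]`, and the `add` function is a made-up example):

```rust
// Exposing Rust through the C ABI: #[no_mangle] keeps the symbol name
// stable and `extern "C"` fixes the calling convention. Everything
// Rust-specific (generics, traits, lifetimes) is lost at this boundary,
// which is why it's a poor way to consume Rust dependencies from Rust.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust too; a C caller would declare
    // `int32_t add(int32_t, int32_t);` against the shared library.
    assert_eq!(add(2, 3), 5);
}
```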


> I genuinely cannot see how Rust would be able to scale to something usable for all system applications the way it is now. Is every graphical application supposed to duplicate and statically link the entire set of GNOME/GTK or KDE/Qt libraries it needs? The system would become ginormous.

You don't have to statically link C libraries.


I am referring to Rust scaling up to where C is now (i.e. try to remove C from the picture).

As in, if there will ever be a Rust equivalent of KDE/GNOME (e.g. maybe COSMIC), it will have similar libraries. They will have to be in Rust, and will have to be statically linked. And they will eventually grow up to be the size of KDE/GNOME (as opposed to the tiny thing COSMIC is now).


If it is any consolation, early C did not have dynamic linking either. It was added afterward. From what I understand, UNIX systems had statically linked binaries in /bin until ELF was introduced and gradually switched to dynamically linked binaries afterward.

If you like Rust, this is fine, but I will stay with C. I find it much better for my purposes.

He said elsewhere that he does not even use Rust. He uses TypeScript. I am confused why he is bothering to push Rust when he does not even use it.

I would use it if I had the opportunity, that's why.

To generate safer C?

Compiling Rust to C doesn’t simplify interoperability. Either way you’ll be calling C functions. I assume compiling Rust to C is useful if you’re targeting some esoteric platform that Rust compiler doesn’t support.

Payment systems take huge fees. It is always good if those fees go back into the country rather than elsewhere. Digital payments are something fundamental, like electricity.

Brazilian Pix is free though, at least for the time being. IMO the biggest thing is not the money behind it, but the ability to track individual payments. Even then, I prefer the government to have that information rather than some shady owner of a private company.

Pix is free for individuals. Companies may* pay for Pix services. My bank (which is not a good bank) charges a fixed fee of 4 BRL (approx. 1 USD) per transaction (to send a Pix, not to receive one). Pix through "maquininhas" may cost the seller ~1%.

* may: banks are allowed to charge.


To give some reference, using stripe you pay 2-3% for credit card payments and PayPal charges you ~5% of the transaction amount. Apple store and Steam take 30%. So honestly 1% sounds like a great deal.

I think comparing Steam, and to some extent Apple, with payment methods is off: they are stores, and it costs money to host apps and games. For Apple I'm not 100% sure, but I read a while ago that they also pay taxes for you in the country where you sell, while pure payment processors are just a proxy moving money from one account to another. You could argue that 30% is high for that, but that's not what we're discussing here.

Which is way cheaper than credit/debit card charges from Visa and Mastercard.

And there's no surprise fraud claims.

My wife runs a small retail makeup shop on Shopify, which started before pix and those surprise false fraud claims almost killed the business.

Pix was such a game changer. It is perfect.


> 4 BRL (aprox 1 USD)

I wish. That's off by 50%


> Payment systems take huge fees

That is the monopoly cost. UPI is free for both payee and payer. Whatever it costs banks to operate it is covered by reduced cost of dealing with cash/consumer.


Fast, free, and frictionless payments allow the economy to run better. That's better for the government and the people. Only corporations like Visa and Mastercard lose.

It can also be a Go issue. The garbage collector may not have had time to free memory.

There is still more trade with Russia than with many countries on the list. Even Syria and Iran got tariffs.

> Yes, that, too. But you are forgetting that React makes all that opimizing work necessary in the first place.

Isn't runtime state optimization the only responsibility of React? It's a library. The rest falls to Vite, Deno, et al.


To be honest, I am very confused with this benchmark. It is misleading.

What is the actual size of the production build for just that button part? I think the ShadCN button's source code is not the same size as the button the client downloads in a production environment, especially if you have SSR.


If you look at the demo, all of the payload comes from react and the tailwindcss classes that the shadcn button refers to.

It's dishonest to call this the payload of "one shadcn button" since it's basically all react/tailwindcss fixed cost and not literally a shadcn button.

But still, that's a decently broad demo to fit in a small payload, so the exaggeration kinda takes away from that.

The main thing I care about in client development is the state management solution since that's where most of the complexity comes from, and it's what makes or breaks an approach. In React, I use MobX which I see as the holy grail.

Whether Nue is nice to use or not for me is gonna come down to how nice this is to work with: https://nuejs.org/docs/interactivity.html


> It's dishonest to call this the payload of "one shadcn button" since it's basically all react/tailwindcss fixed cost and not literally a shadcn button.

Does the ShadCN button work without paying that cost?


Sure. If you just want the shadcn button by itself, it will generate this html: <button class="{tailwindcss classes}" />.

And it has a dependency on some common tailwindcss classes that will get injected into your bundle.

Most shadcn components depend on tailwindcss classes, and how the whole shtick works is that tailwindcss only includes in your bundle the classes that your components use across your app. Which is kind of a clever integration for a ui component 'package manager' for reducing bundle size.

But most importantly, consider that OP's demo has very minimal CSS because they aren't using a CSS framework, and that has nothing to do with their Nue framework. It's not like their Nue framework comes with an optimized answer to tailwindcss/shadcn; you have to bring your own solution.

So if you use tailwindcss/shadcn with React, you'd certainly use it with Nue.

What Nue should do instead is add libraries to either side necessary to reach parity with the other side. Nue has built-in routing, so it would be fair to add react-router-dom to the React side. And they wouldn't have 100 people calling them out for the dumb benchmark.


Are you really going to build a site which just consists of a button?


If I'm working with a payload budget and I'm using React I guess so?


Stop. You really do not need to be acting like this.


Your question is, incredibly, more dishonest than the original claim. Truly impressive.


What's dishonest about my question? Please keep your personal attacks to yourself.


Seems like you should be correct. A shadcn button is just react, tailwind, and @radix/react-slot. But if you simply create a new shadcn Next.js template (i.e. pnpm dlx shadcn@latest init) and add a button, the "First Load JS" is ~100kB. Maybe you could blame that on Next.js bloat and we should also compare it to a Vite setup, but it's still surprising.


Yeah, but my point is that you download the runtime and core of React/Tailwind just once for the whole web page, and those should be removed from the test, or at least there should be a comparison that includes both cases.

You only need a couple of images on your webpage and that runtime size soon becomes irrelevant.

So the question is: how much overhead do React/Tailwind CSS add beyond that initial runtime size? If I have 100 different buttons, is it suddenly 10,000 kilobytes? I think it is not. This is the most fundamental issue with all modern web benchmarking results. They benchmark sites that do not reflect reality in any sense.

These frameworks are designed for content-heavy websites, where performance means something completely different. If every button added that much overhead, it would of course be a big deal. But I think they do not add that much overhead.


> Yeah, but my point is that you download the runtime and core of React/Tailwind just once for the whole web page and those should be removed from the test, or at least there should be comparison which includes the both cases.

You think a test that is comparing the size of apps that use various frameworks should exclude the frameworks from the test? Then what is even being tested?


The actual overhead when the site is used in reality? How much overhead are those 100 different buttons creating? What is the performance of state management? What is the rendering performance on complex sites? How much size overhead do modular files add? Does .jsx contribute more to page size than raw HTML? The library's runtime bundle size is mostly meaningless, unless you want to provide a static website with just text. And then you should not use any of these frameworks.


Depends on if the free trial is misused here somehow. I don’t know if it is.


There is no circumvention of any access controls.


Due to the over-broad nature of the DMCA, almost anything is an access control that you aren't allowed to circumvent.


> 1. The beneficial effect of fluoride occurs only when fluoride is applied externally, in contact with the tooth enamel

I think you are kinda misusing science/not science arguments.

This is indeed the scientific reason why there is fluoride in the water. It is also the scientific reason why some countries removed it.

In some countries people take good care of their teeth on average, and in others not so much. So there is science behind why fluoridation happens. You can read many articles about fluoride's benefits for teeth and about the impact of dental health on overall health.


It must form the search index somehow. That happens prior to any human action. It simply would not find the page at all if it respected robots.txt.


I remember in late 90s/early 2000 as a teen going to robots.txt to specifically see what they were trying to hide and exploring those urls.

What is the difference if I use a browser or a LLM tool (or curl, or wget, etc) to make those requests?


But how did you find those sites that had the robots.txt to begin with? An LLM must somehow find the existence of those pages and store that information before it can crawl them further or mark them as acceptable sources.


I am a human so I can visit other sites with links or from word of mouth or business cards or literally anywhere?

LLM finds out about it from me, when I ask it to go to the link.

You don’t accuse browsers of “somehow find[ing] the existence of those pages”. How does a browser know what page to visit?

The user tells it to.

If I prompt an LLM “go to example.net and summarize the page” how is that any different from me typing example.net in a browser URL bar?


That is certainly true. But that is not how these work 99% of the time. This post originated from "search".


I think a distinction needs to be made between ingesting for LLM training and ingesting / crawling because a human asked it to during an inference session.

I have been talking about the latter, agree the former is abusive.


careful, some of those are honey pots or trip wires


Let's say you had a local model with the ability to do tool calls. You give that llm the ability to use a browser. The llm opens that browser, goes to Google or Bing, and does whatever searches it needs to do.

Why would that be an issue?

