Hacker News | Defletter's comments

It may be due to the async I/O changes, which seem to have caused a wave of renewed interest in the language.


tbf, it does require a technique otherwise you risk just pushing plaque underneath your gums


The level of knee-jerk reaction to anything Rust entering traditionally C projects borders on the pathological. That email is about as polite as it gets without being coddling.


Do keep in mind that a lot of the people involved in these sorts of things are neurodiverse in some ways, and may have significant trouble dealing with change.

As teh64 helpfully pointed out in https://news.ycombinator.com/item?id=45784445 some hours ago, 4ish years ago my position on this was a total 360 and I'd have had the same reaction to now-me's proposal.


All these changes require work. Because of this, other priorities will get less attention. It would be ironic if bad security flaws were missed or introduced because of all the work switching to Rust. It's also very likely that all the new code written in Rust will be far less mature than the existing source bases. So the outcome might be (very probably, actually) a lot of work to worsen security.

Most of the academic research into these sorts of typesafe languages usually returns the null result (if you don't agree, it means you haven't read the research on this topic). That's researcher-speak for "it didn't work and you shouldn't be using these techniques." Security is a process, not a silver bullet, and "just switch to Rust" is a very silvery bullet.


It's not like I'm in a hurry to switch to Rust and will go at it full steam. It's amongst the lowest-priority items.

A lot of the Rust rewrites suffer a crucial issue: they want a different license than what they are rewriting and hence rewrite from scratch because they can't look at the code.

But here we're saying: Hey we have this crucial code, there may be bugs hidden in it (segfaults in it are a recurring source of joy), and we'll copy that code over from .cc to .rs and whack it as little as possible so it compiles there.

The problem is much more with, for example, the configuration parser, which in a sense desperately needs a clean rewrite: it's way too sloppy, and that's making it hard to integrate.

In an optimal world I'd add annotations to my C++ code and have a tool that does the transliteration to Rust at the end; like when the Go compiler got translated from C to Go. It was glorious.


*180, for other people confused by this.


/me hides in shame


How does Bun avoid this? Or is it more that Bun provides things that you'd otherwise need a dependency for (eg: websockets)?


From a link mentioned elsewhere in the thread:

> Unlike other npm clients, Bun does not execute arbitrary lifecycle scripts for installed dependencies, such as `postinstall` and `node-gyp` builds. These scripts represent a potential security risk, as they can execute arbitrary code on your machine.

https://bun.com/docs/guides/install/trusted

I've also found the Bun standard library is a nice curated set of features that reduces dependencies.


Hmmm, it still has a pretty extensive default list of permitted npm packages, which wouldn't necessarily be a problem if there were a way to disable it, but I can't seem to find it.


The latter is what I was getting at, yeah: an up-to-date set of standard-library-esque functions implemented in native code, so the need to reach for npm for a dependency happens far less often.


Presumably, a reset is resetting to a browser's defaults, whereas a normaliser is about establishing a cross-browser default. I haven't done much web-dev in recent years, but I vividly remember the same page looking different in different browsers, particularly prior to HTML5.


None of that matters. Just set the properties you want the element you are using to what you want it to be. No need to think about any other elements except the ones you use and, when you set a property, it will be the same across all browsers. No need to give it a name or import it into your style sheet.

No thinking involved at all outside of your normal design method. No need to investigate the latest trends or activities by some online guy. Just do what you do.


> Just set the properties you want the element you are using to what you want it to be.

"Just". What I remember from that time was putting a button at relative 0,0 and having it at the top-left of the page in one browser but offset in another, because that browser was adding padding/margin to <body>. I cannot say which was "correct", but it nonetheless pushed me to use normalisers, which prevented this kind of problem from ever coming up again.


> a reset is resetting to a browser's defaults

No, a browser's defaults are, well, its defaults. One doesn't reset to them.

I think the line between a normalizer and reset stylesheet is _very_ fine, if there even is a line. A normalizer is probably _slightly_ more opinionated than a reset stylesheet. In the end, the difference isn't really important. If you need a reset stylesheet, normalizer will probably do just as well.


The line is not fine. Resets don't apply any styling; normalizers do. Normalizers keep the overall styling, like the margins of paragraphs, and set them to an arbitrary value so all browsers will act the same. A reset is usually just: * { all: unset; }


Really wish people would just bite the bullet and do configuration as code instead of trying to make all these config petlangs.


I appreciate that the ts/js ecosystem seems to be moving in this general direction.

Lots of config.json is being replaced by the nicer config.ts.


I really dislike it when a Turing-complete language is used for configuration. It almost always removes any possibility of programmatically processing or analyzing the config. You can't just JSON.parse the file and check it.

Also, I've been in projects where I had to debug the config multiple levels deep, tracking side effects someone made in some constructor while trying to DRY out the code. We already have these issues in the application itself. Let's not also have them in configurations.
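
To make the JSON.parse point concrete, here is a minimal Python sketch (the key names are made up) of what a declarative format gives you: the config can be parsed and shape-checked by any tool, without ever executing it.

```python
import json

# Illustrative schema: which keys must exist and what type they must be.
REQUIRED_KEYS = {"host": str, "port": int}

def validate_config(text: str) -> dict:
    """Parse a declarative JSON config and check its shape.

    Nothing in `text` is executed, so a linter, CI job, or migration
    script can run this on any config file -- the property you lose
    once the config is a program.
    """
    config = json.loads(text)
    for key, expected_type in REQUIRED_KEYS.items():
        if key not in config:
            raise ValueError(f"missing key: {key}")
        if not isinstance(config[key], expected_type):
            raise ValueError(f"{key} must be a {expected_type.__name__}")
    return config
```

The same kind of static check is impossible in general once the config file can contain loops, imports, and function calls.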


That's why Starlark exists.

You need something between JSON/YAML and Python/JavaScript.

A config language makes the possibility space small.

It also makes it deterministic for CI and repeatable builds.

It also makes it parallelizable and cacheable.

Don't use your language for config. People will abuse it. Use a config language like Starlark or RCL.


> It almost always breaks every possibility to programmatically process or analyze the config. You can't just JSON.parse the file and check it.

Counterpoint: 95% of config-readers are or could be checked in with all the config they ever read.

I have yet to come across a programming language where it is easier to read + parse + type/structure validate a json/whatever file than it is to import a thing. Imports are also /much/ less fragile to e.g. the current working directory. And you get autocomplete! As for checks, you can use unit tests. And types, if you've got them.

I try to frame these guys as "data values" rather than configuration though. People tend to have less funny ideas about making their data 'clean'.

The only time where JSON.parse is actually easier is when you can't use a normal import. This boils down to when users write the data and have practical barriers to checking it in to your source code. IME such cases are rare, and most are bad UX.

> Side effects in constructors

Putting such things in configuration files will not save you from people DRYing out the config files indirectly with effectful config processing logic. I recently spent the better part of a month ripping out one such chimera because changing the data model was intractable.
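
For contrast, the import-based approach described above can be sketched in Python: the "data values" live in a plain checked-in module, and the checking happens via types plus a unit-test-style assertion. All names here are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    host: str
    port: int
    debug: bool = False

# In a real project this would be its own settings module, imported by
# the application -- so editors autocomplete the fields and a type
# checker validates them.
SETTINGS = ServerConfig(host="localhost", port=8080)

def check(config: ServerConfig) -> None:
    """The kind of check that would live in a unit test."""
    assert 0 < config.port < 65536
    assert config.host != ""
```

No file paths, no working-directory fragility: the config arrives through the same import machinery as everything else.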


This is what's nice about Pkl: you define a schema as a Pkl file; you define a value of that schema as a Pkl file that imports the schema; `pkl eval myfile.pkl` will do the type check and output YAML for visual inspection or programmatic processing. Keeping it to one file per module means that I almost never obsessively D-R-Y my Pkl configs.

Actually that's not even the biggest benefit (which is tests for schemas), but it's nice to have the ".ts" file log the config as JSON and then have the app consume it as JSON, rather than importing the .ts file and all its dependencies and having weird things like "this configuration property expects a lambda."


I have yet to see a JS project where the config for each tool could not be something simple like `.toolrc`. We could have some markers to delineate plugin config.

Instead, there's another piece of software in the configuration of sample projects, instead of just good code organization and sensible conventions.


When Python projects used that approach (setup.py files), it meant that arbitrary code had to be run just to know what a package's dependencies were. Now it's pyproject.toml.


pyproject.toml calls into a build backend which is... Python.

It is good to have a simple, declarative entry point to the build system which records declarative elements of the build. The non-declarative elements of the system are configuration-as-code.


My preference is towards simpler formats like:

  option value
Easy to edit and manipulate. JSON and YAML are always a nightmare if they're user-facing. As for Ansible, I'd love to see some Scheme/Lisp variants.


Then how do you distinguish between the string "52" and the number 52?

Keep adding more edge cases and you have something that resembles JSON.


Why do you need to differentiate between the two as an input? It’s config, not random data. If you have

  email test@example.com
  logo-size 100
  background #adadaf
  modules auth
  modules db
  modules files
The only reason to have a special token here is if you have multi-line values. Types are not a concern.
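
A reader for this kind of "option value" format is only a few lines, which is much of its appeal. A Python sketch (treating repeated options like `modules` as accumulating into a list is one possible convention, not a standard):

```python
def parse_simple_config(text: str) -> dict:
    """Parse 'option value' lines; repeated options accumulate into a list.

    Every value stays a string -- as argued above, whether 'logo-size'
    means the number 100 is the application's concern, not the format's.
    """
    config: dict = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        option, _, value = line.partition(" ")
        if option in config:
            if isinstance(config[option], list):
                config[option].append(value)
            else:
                config[option] = [config[option], value]  # promote to list
        else:
            config[option] = value
    return config
```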


I don't, and neither does my Perl-based software. There should not be the possibility that a given parameter can take both a string and a numeric value at the configuration level the user interfaces with, as the "real-world analogy" programming paradigm suggests. JSON and the like still have their place, but in a lower, machine-to-machine layer.


Exactly. Emacs Lisp is an existence proof that this can be done well.


You beat me to it!

And for those that haven't taken a look at it, the "customize" menu and everything it supports is silly impressive. And it just writes the results out, like a boss.*

* Obviously, it has a lot of "don't edit below this line" complexity to it. But that doesn't change that it is right there.


Config as code suffers from two big problems:

- Turing completeness means that you have to deal with the halting problem, meaning you can't statically ensure that a program ever completes. This is really shit when dealing with config: one buggy while loop or infinite recursive function and stuff just grinds to a halt with no good way of debugging it. Having this problem at the config level might mean that your program never even gets to start up properly, so you never get to set up the logging / otel or whatever you usually use to catch those problems.

- Normal programming languages have side effects and are therefore insecure! They can usually read and write files anywhere, open sockets, send traffic over the internet, etc. These are all properties you don't want in a config language! Especially if you can import code from other modules: a single import statement in a "config file" is now a huge security risk! This is why npm keeps having security nightmares again and again and again.

So what you want from a config language is not the same thing as from a programming language, you want as much power as you can get without "Turing completeness" and without any "side effects". That's the reason we have stuff like HCL and whatever the article used as an example.
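
Python's standard library happens to ship a tiny example of this "power without Turing completeness or side effects" idea: `ast.literal_eval`, which evaluates only literal expressions. A sketch of using it as a restricted config-value reader:

```python
import ast

def read_value(text: str):
    """Evaluate a config value as a pure literal.

    ast.literal_eval accepts numbers, strings, lists, dicts, tuples,
    sets, booleans and None, but rejects calls, imports, and loops --
    so it can neither hang forever nor touch files or sockets.
    """
    return ast.literal_eval(text)
```

Languages like HCL and Starlark apply the same principle at the scale of a whole configuration language rather than a single expression.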


This year I started using an SQLite file specifically for config values

Have used everything from JSON to CUE and in between. Tired of the context switch. Need to use SQL anyway. Fewer dependencies overall required.
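
One way that might look, with the table and column names made up for illustration: a single key/value table via Python's stdlib sqlite3.

```python
import sqlite3

def open_config(path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) a one-table key/value config store."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT)"
    )
    return conn

def set_value(conn: sqlite3.Connection, key: str, value: str) -> None:
    """Insert or overwrite a config entry (SQLite upsert)."""
    conn.execute(
        "INSERT INTO config (key, value) VALUES (?, ?) "
        "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
        (key, value),
    )

def get_value(conn: sqlite3.Connection, key: str):
    """Return the stored value, or None if the key is absent."""
    row = conn.execute(
        "SELECT value FROM config WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else None
```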


Curious - how do you version the config?


I'm guessing they version a SQL file


Yes. Git log is a handy thing for versioning.

I never relied on it for developer notes. Just arguing semantics in those cases.


Yes, though languages need to develop and provide restricted execution modes for "configuration as code" for security enforcement.


Can't speak for everyone, but I think a substantial part of the shift from Maven to Gradle was the ability to write build scripts: you didn't need to write a plugin. I'm hoping that Maven (and Gradle) can take advantage of JEPs 458 and 512 to allow people to write build scripts for their Java projects in Java.

- https://openjdk.org/jeps/458

- https://openjdk.org/jeps/512


Ant had the ability to write build scripts. It's part of what made Ant such a terrible build tool and IMO it's what makes Gradle such a terrible build tool.

Maven's requirement that you write plugins meant that you had to have a decent understanding of the conventions that Maven brought to the table before you could inflict some monster on the world.

In Gradle you can do something quick to get your idea working, however terrible it is and however much it defies convention or breaks the expectations of anyone coming to the project later on.


While I'm unsure about the efficacy of LLMs, I do yearn for language tooling that lets you 'Bring Your Own Syntax'. I'm someone who prefers TypeScript, Java, and Zig's syntax and genuinely, genuinely struggles with Go, Crystal, Kotlin's syntax. Whoever came up with := versus = needs to stub their toe at least once a day for the rest of time. But if I could write code for Go using a different syntax, I'd write way more Go code. I feel like that's what petlangs like Borgo (https://github.com/borgo-lang/borgo) and AGL (https://github.com/alaingilbert/agl) are doing: making Go less goey.


Question, does that work with other types? Say you have two u16 values, can you concatenate them together with ~ into a u32 without any shifting?


It works with arrays (both fixed-size and dynamically sized); between arrays and elements; but not between two scalar types that don't overload opBinary!"~". So no, it won't work between two `ushort`s to produce a `uint`.
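
For two scalars, the equivalent of concatenation is the usual shift-and-or, which is presumably what a "smart" compiler would be expected to recognize. A Python sketch of packing two 16-bit values into a 32-bit one:

```python
def concat_u16(hi: int, lo: int) -> int:
    """Pack two 16-bit values into one 32-bit value, `hi` in the top half."""
    assert 0 <= hi <= 0xFFFF and 0 <= lo <= 0xFFFF
    return (hi << 16) | lo
```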


No, it doesn't. But I'm not sure that this matters; a sufficiently "smart" compiler understands that this is the same thing.


I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit, because then they can gleefully point at it and pretend that any first-party package manager for C/C++ would inevitably result in the same, never mind the other languages that do not have this issue, or have it to a far, far lesser extent. Do these cultists just not use dependencies? Are they just [probably inexpertly] reinventing every wheel? Or do they use system packages, like that's any better *cough* AUR exploits *cough*. While dependency hell on nodejs (and even Rust, if we're honest) is certainly a concern, it's npm's permissiveness and lack of auditing that's the real problem. That's why Debian is so praised.


What makes me a C++ "cultist"? I like the language, but I don't think it's a cult. And yes, they do implement their own wheel all the time (usually expertly) because libraries are reserved for functions that really need it: writing left pad is really easy. They also use third-party libraries all the time, too. They just generally pay attention to the source of that library. Google and Facebook also publish a lot of C++ libraries under one umbrella (abseil and folly respectively), and people often use one of them.


STOP SAYING CULTIST! The word has very strong meaning and does not apply to anyone working with C or C++. I take offense at being called a cultist just because I say C++ is not nearly as bad as the haters keep claiming it is - as well I should.


> Or do they use system packages like that's any better *cough* AUR exploits *cough*.

AUR stands for "Arch User Repository". It's not the official system repository.

> I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit

I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.

My problem with language package managers is that people love them precisely because they don't want to learn how to deal with dependencies. Which is actually the problem: if I pull a random Rust library, it will itself pull many transitive dependencies. I recently compared two implementations of the same standard (C++ vs Rust): in C++ it had 8 dependencies (I can audit that myself). In Rust... it had 260 of them. 260! I won't even read through all those names.

"It's too hard to add a dependency in C++" is, in my opinion, missing the point. In C++, you have to actually deal with the dependency. You know it exists, you have seen it at least once in your life. The fact that you can't easily pull 260 dependencies you have never heard about is a feature, not a bug.

I would be totally fine with great tooling like cargo, if it looked like the problem of random third-party dependencies was under control. But it is not. Not remotely.

> Do these cultists just not use dependencies?

I choose my dependencies carefully. If I need a couple functions from an open source dependency I don't know, I can often just pull those two functions and maintain them myself (instead of pulling the dependency and its 10 dependencies).

> Are they just [probably inexpertly] reinventing every wheel?

I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.

Would it make me more of an expert if I was pulling, running and distributing random code from the Internet without having the smallest clue about who wrote it?

Do I need to complain about how hard CMake is and compare a command line to a "magic incantation" to be considered an expert?


> AUR stands for "Arch User Repository". It's not the official system repository.

Okay... and? The point being made was that the issue of package managers remains: do you really think users are auditing all those "lib<slam-head-on-keyboard>" dependencies that they're forced to install? Whether they install those dependencies from the official repository or from homebrew, or nix, or AUR, or whatever, is immaterial: the developer washed their hands of this, instead leaving it to the user, who in all likelihood knows significantly less than the developers do, to make an informed decision, so they YOLO it. Third-party repositories would not exist if they had no utility. But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted. Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories, because dependencies are, at best, a slippery slope.

> "It's too hard to add a dependency in C++"

It's not hard to add a dependency. I actually prefer the dependencies-as-git-submodules approach to package managers: it's explicit and you know what you're getting and from where. But using those dependencies is a different story altogether. Don't you just love it when one or more of your dependencies has a completely different build system to the others? So now you have to start building dependencies independently, whose artefacts are in different places, etc., etc. This shouldn't be a problem.

> I, for one, do not love it when there is an exploit in a language package manager.

Oh please, I believe that about as much as ambulance chasers saying they don't love medical emergencies. Otherwise, why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm as if anyone is actually asking for that, instead of, say, what Zig or Go has? It's because of the cultism, and every npm exploit further entrenches it.


C++ usage has nothing to do with static/dynamic linking. One is a language and the other is a way of using libraries. Dynamic linking gives you small binaries with a lot of cross-compatibility, and static linking gives you big binaries with known function. Most production C++ out there follows the same pattern as Rust and Go and uses static linking (where do you think Rust and Go got that pattern from?). Python is a weird language that has tons of dynamic linking while also having a big package manager, which is why pip is hell to use and PyTorch is infamously hard to install.

Dynamic linking shifts responsibility for the linked libraries over to the user and their OS, and if it's an Arch user using AUR they are likely very interested in assuming that risk for themselves. 99.9% of Linux users are using Debian or Ubuntu with apt for all these libs, and those maintainers do pay a lot of attention to libraries.


> But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted.

So you do understand my point about AUR. AUR is like adding a third-party repo to your Debian configuration. So it's not a good example if you want to talk about official repositories.

Debian is a good example (it's not the only distribution that has that concept), which proves my point and not yours: this is better than unchecked repositories in terms of security.

> Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.

Nobody says that ever. Either you make up your cult just to win an argument, or you don't understand what C/C++ people say. The whole goddamn point is to have a trusted system repository, and if you need to pull something that is not there, then you do it properly.

Which is better than pulling random stuff from random repositories, again.

> I actually prefer the dependencies-as-git-submodules approach

Oh right. So you do it wrong, it's good to know and it will answer your next complaint:

> Don't you just love it when one or more of your dependencies has a completely different build system to the others

I don't give a damn because I handle dependencies properly (not as git submodules). I don't have a single project where the dependencies all use the same build system. It's just not a problem at all, because I do it properly. What do I do then? Well exactly the same as what your system package manager does.

> this shouldn't be a problem.

I agree with you. Call it a footgun if you wish, you are the one pulling the trigger. It isn't a problem for me.

> why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm

Where did I do that?

> It's because of the cultism, and every npm exploit further entrenches it.

It's because npm is a good example of what happens when it goes out of control. Pip has the same problem, and Rust as well. But npm seems to be the worst, I guess because it's used by more people?


Your defensiveness is completely hindering you and I cannot be bothered with that so here are some much needed clarifications:

> I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.

If you do neither of those things then did it ever occur to you that this might not be about YOU?

> I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.

Yeah, hi, no you didn't explain that. You're probably mistaking me for someone else in some other conversation you had. The only comment of yours prior to mine in the thread is you saying "I can use pkg-config just fine." And again, you're assuming that I'm calling YOU incompetent. But okay, I'm sure your code never has bugs, never has memory issues, is never poorly designed or untested, that you can whip out an OpenGL alternative or whatever in no time and have it be just as stable and battle-tested, and that to say otherwise must be calling you incompetent. That makes total sense.

> AUR stands for "Arch User Repository". It's not the official system repository.

> So it's not a good example if you want to talk about official repositories.

I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.

--

Overall, getting bored of this, though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny. Have a nice day.


> Your defensiveness

Start by not calling everybody disagreeing with you a cultist, next time.

> I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.

It's not that it is unclear, it's just that it doesn't make sense. When we compare npm to a system package manager in this context, the thing we compare is whether or not it is curated. Agreed, I was maybe not using the right words (I should have said curated vs non-curated package managers), but it did not occur to me that it was unclear, because comparing npm to a system package manager makes no sense otherwise. It's all just installing binaries somewhere on disk.

AUR is much like npm in that it is not curated. So if you find that it is a security problem: great! We agree! If you want to pull something from AUR, you should read its PKGBUILD first. And if it pulls tens of packages from AUR, you should think twice before you actually install it. Just like if someone tells you to do `curl https://some_website.com/some_script.sh | sudo sh`, no matter how convenient that is.

Most Linux distributions have a curated repository, which is the default for the "system package manager". Obviously, if users add custom, not curated repositories, it's a security problem. AUR is a bad example because it isn't different from npm in that regard.

> though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny

Well I did elaborate at least one bit, but I doubt you are interested in more details than what I wrote: "What do I do then? Well exactly the same as what your system package manager does."

I install the dependencies somewhere (just like the system package manager does), and I let my build system find them. It could be with CMake's `find_package`, it could be with pkg-config, whatever knows how to find packages. There is no need to install the dependencies in the place where the system package manager installs stuff: it can go anywhere you want. And you just tell CMake or pkg-config or Meson or whatever you use to look there, too.

Using git submodules is just a bad idea for many reasons, including the fact that you need all of them to use the same build system (which you mentioned), that a clean build usually implies rebuilding the dependencies (for nothing), and that it doesn't work with package managers (system or not). And usually, projects that use git submodules only support that, without offering a way to use the system package(s).


> Start by not calling everybody disagreeing with you a cultist, next time.

You'd do very well as a culture-war pundit. Clearly I wasn't describing a particular kind of person; no, clearly I'm just talking about everyone I disagree with /s


So, not interested at all in how to deal with dependencies without git submodules, I reckon?

We can stop here indeed.


You misunderstand, I am already well aware. My comment about your lack of elaboration was not due to any ignorance on my part, but rather to point out how you assumed that and refused to elaborate anyway. The idea that I may have my reasons for preferring dependencies-as-git-submodules or their equivalents (like Zig's package system) never crossed your mind. Can't say I'm surprised. Oh well.


> The idea that I may have my reasons for preferring dependencies-as-git-submodules

Well, git submodules are strictly inferior and you know it: you even complained about the fact that it is a pain when some dependencies use different build systems.

You choose a solution that does not work, and then you blame the tools.


Okay, I'll bite: your proposed alternative to being able to specify exact versions of dependencies, regardless of operating system or distro, that I can statically include into a single binary, with everything project-local, guaranteed, is... what? Is it just "Don't"?


I'm not sure what you mean.

What I am saying is that using a dependency is formalised for build systems. Be it npm, cargo, gradle, meson, cmake, you name it.

In cargo, you add a line to a toml file that says "please fetch this dependency, install it somewhere you understand, and then use it from this somewhere". What is convenient here is that you as a user don't need to know about those steps (how to fetch, how to install, etc). You can use Rust without Cargo and do everything manually if you need to; it's just that cargo comes with the "package manager" part included.

In C/C++, the build systems don't come with the package manager included. It does not mean that there are no package managers. On the contrary, there are tons of them, and the user can choose the one they want to use. Be it the system package manager, a third-party package manager like conan or vcpkg, or doing it manually with a shell/python script. And I do mean the user, not the developer. And because the user may choose the package manager they want, the developer must not interfere otherwise it becomes a pain. Nesting dependencies into your project with git submodules is a way to interfere. As a user, I absolutely hate those projects that actually made extra work to make it hard for me to handle dependencies the way I need.

How do we do that with CMake? By using find_package and/or pkg-config. In your CMakeLists.txt, you should just say `find_package(OpenSSL REQUIRED)` (or whatever it is) and let CMake find it the standard way. If `find_package` doesn't work, you can write a find module (that e.g. uses pkg-config). A valid shortcut IMO is to use pkg-config directly in CMakeLists for very small projects, but find modules are cleaner and actually reusable. CMake will search in a bunch of locations on your system. So if you want to use the system OpenSSL, you're done here, it just works.

If you want to use a library that is not on the system, you still do `find_package(YourLibrary)`, but by default it won't find it (since it's not on the system). In that case, as a user, you configure the CMake project with `CMAKE_PREFIX_PATH`, saying "before you look on the system, please look into these paths I give you". So `cmake -DCMAKE_PREFIX_PATH=/path/where/you/installed/dependencies -Bbuild -S.`. And this will not only just work, but it means that your users can choose the package manager they want (again: system, third-party like conan/vcpkg, or manual)! It also means that your users can choose to use LibreSSL or BoringSSL instead of OpenSSL, because your CMakeLists does not hardcode any of that! Your CMakeLists just says "I depend on those libraries, and I need to find them in the paths that I use for the search".
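
Condensed into a minimal CMakeLists sketch (OpenSSL as the example dependency, per the above; `app` and `main.cpp` are placeholders):

```cmake
cmake_minimum_required(VERSION 3.16)
project(example CXX)

# Declare the dependency, but do NOT say where it lives. The user
# decides that at configure time, e.g.:
#   cmake -DCMAKE_PREFIX_PATH=/path/where/you/installed/dependencies -Bbuild -S.
find_package(OpenSSL REQUIRED)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE OpenSSL::SSL OpenSSL::Crypto)
```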

Whatever you do that makes CMake behave like a package manager (and I include CMake features like the FetchContent stuff) is IMO a mistake, because it won't work with dependencies that don't use CMake, and it will screw (some of) your users eventually. I talk about CMake, but the same applies for other build systems in the C/C++ world.

People then tend to say "yeah I am smart, but my users are stupid and won't know how to install dependencies locally and point CMAKE_PREFIX_PATH to them". To which I answer that you can offer instructions to use a third-party package manager like conan or vcpkg, or even write helper scripts that fetch, build and install the dependencies. Just do not do that inside the CMakeLists, because it will most certainly make it painful for your users who know what they are doing.
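A hedged sketch of such a helper script (the URL, version, and paths are made up for illustration): it builds a dependency into a project-local prefix that the user then passes via CMAKE_PREFIX_PATH, without the CMakeLists ever knowing about it:

```shell
#!/bin/sh
# deps.sh -- fetch and install a dependency into a project-local prefix.
# Everything below (URL, version, layout) is illustrative.
set -eu

PREFIX="$PWD/deps/install"
mkdir -p deps && cd deps

curl -LO https://example.org/somelib-1.2.3.tar.gz
tar xzf somelib-1.2.3.tar.gz
cd somelib-1.2.3

cmake -Bbuild -S. -DCMAKE_INSTALL_PREFIX="$PREFIX"
cmake --build build
cmake --install build

echo "Now configure your project with:"
echo "  cmake -DCMAKE_PREFIX_PATH=$PREFIX -Bbuild -S."
```

Users who already manage dependencies their own way simply ignore the script; nothing in the build itself depends on it.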

Is it simpler than what cargo or npm do? No, definitely not. Is it more flexible? Totally. But it is the way it is, and it fucking works. And whoever calls themselves a C/C++ developer and cannot understand how to use the system package manager, or conan/vcpkg, and set CMAKE_PREFIX_PATH needs to learn it. I won't say it's incompetence, but it's like being a C++ developer and not understanding how to use a template. It's part of the tools you must learn to use.

People will spend half a day debugging a stupid mistake in their code, but somehow can't accept that dealing with dependencies is also part of the job. In C/C++, it's what I explained above. With npm, properly dealing with dependencies means checking the transitive dependencies and being aware of what is being pulled. The only difference is that C/C++ makes it hard to ignore it and lose control over your dependencies, whereas npm calls it a feature and people love it for that.

I don't deny that CMake is not perfect: the syntax is generally weird, and writing find modules is annoying. But it is not an excuse to make a mess at every single step of the process. And people who complain about CMake usually write horrible CMakeLists and could benefit from learning how to do it properly. I don't love CMake, I just don't complain about it everywhere I can, because I can make it work and it's not that painful.


While I do appreciate you taking the time to write that, I am somewhat at a loss. How does this justify the antipathy towards notions of a first-party build system and package manager? That's how we got into this argument with each other: I was calling out C/C++ cultists who cling to the ugly patchwork of hacky tooling that is C/C++'s so-called build systems and decry any notion of a first-party build system (or even a package manager to boot) as being destined to become just like npm.

C/C++ developers clearly want a build system and package manager, hence all this fragmentation, but I can't for the life of me understand why that fragmentation is preferable. For all the concern about supply-chain attacks on npm, why is it preferable that people trust random third-party package managers and their random third-party repackages of libraries (eg: SQLite on conan and vcpkg)? And why is global installation preferable? Have we learnt nothing? There's a reason why Python has venv now; why Maven and Gradle have wrappers; etc. Projects being able to build themselves to a specification, without requiring the host machine to reconfigure itself to suit the needs of one project, is a bonus, not a drawback. Devcontainers should not need to be a thing.

If anything, this just reads like Sunk Cost Fallacy: that "it just works" therefore we needn't be too critical, and anyone who is or who calls for change just needs to git gud. It reminds me of the never-ending war over memory safety: use third-party tools if you must but otherwise just git gud. It's this kind of mindset that has people believing that C/C++'s so-called build systems are just adhering to "there should be some artificial friction when using dependencies to discourage over-use of dependencies", instead of being a Jenga tower of random tools with nothing but gravity holding it all together.

If it were up to me, C/C++ would get a more fleshed-out version of Zig's build system and package manager, ie, something unified, simple, with no central repository, project-local, exact, and explicit. You want SQLite? Just refer to the SQLite git repository at a specific commit and the build system will sort it out for you. Granted, it doesn't have an official build.zig so you'll need to write your own, or trust a premade one... but that would also be true if you installed SQLite through conan or vcpkg.


> How does this justify the antipathy towards notions of a first-party build system and package manager?

I don't feel particularly antipathetic towards notions of a first-party build system and package manager. I find it undeniably better to have a first-party build system instead of the fragmentation that exists in C/C++. On the other hand, I don't feel like asking a 20-year-old project to leave autotools just because I would prefer something else. Or forcing people to install Python because I think Meson is cool.

As for the package manager, one issue is security: is it (even partly) curated or not? I could imagine npm offering a curated repo and a non-curated repo. But there is also a cultural thing there: it is considered normal to have zero control over the dependencies (by this I mean that if the developer has not heard of the dependencies they are pulling, then they are not under control). Admittedly it is not a tooling problem, it's a culture problem. Though the tooling allows this culture to be the norm.

When I add a C/C++ dependency to my project, I do my shopping: I go check the projects, I check how mature they are, I look into the codebase, I check who has control over it. Sometimes I will depend on the project, sometimes I will choose to fork it in order to have more control. And of course, if I can get it from the curated list offered by my distro, that's even better.

> C/C++ developers clearly want a build system and package manager, hence all this fragmentation

One thing is legacy: it did not exist before, many tools were created, and now they exist. The fact that the ecosystem had the flexibility to test different things (which surely influenced the modern languages) is great. In a way, having a first-party tool makes it harder to get that. And then there are examples like Swift, where the ecosystem slowly converged towards SwiftPM. But at the time CocoaPods and Carthage were invented, SwiftPM was not a thing.

Also devs want a build system and package manager, but they don't necessarily all want the same one :-). I don't use third-party package managers for instance; instead I build my dependencies manually, which I find gives me more control, especially for cross-compiling. Sometimes I have specific requirements, e.g. when building a Linux distribution (think e.g. Yocto or buildroot). And I don't usually want to depend on Python just for the sake of it, and Conan is a Python tool.

> why is it preferable that people trust random third-party package managers and their random third-party repackages of libraries (eg: SQLite on conan and vcpkg)?

It's not. Trusting a third-party package manager is actually exactly the same as trusting npm. It's more convenient, but less secure. However it's better when you can rely on a curated repository (like what Linux distributions generally provide). Not everything can be curated, but there is a core. Think OpenSSL for instance.

> And why is global installation preferable?

For those dependencies that can be curated, there is a question of security. If all the programs on your system link against the same system OpenSSL, then it's super easy to update that OpenSSL when there is a security issue. And in situations where what you ship is a Linux system, there is no point in not doing it. So there are situations where it is preferable. If everything is statically linked and you have a critical fix for a common library, you need to rebuild everything.

> If it were up to me

Sure, if we were to rebuild everything from scratch... well we wouldn't do it in C/C++ in the first place, I'm pretty sure. But my Linux distribution exists, has a lot of merits, and I don't find it very nice when people try to impose their preferences. I am fine if people want to use Flatpak, cargo, pip, nix, their system package manager, something else, or a mix of all that. But I like being able to install packages on my Gentoo system the way I like, potentially modifying them with a user patch. I like being able to choose whether I link statically or dynamically (on my Linux, I like to link at least some libraries like OpenSSL dynamically; if I build an Android apk, I like to statically link the dependencies).

And I feel like I am not forcing anyone into doing what I like to do. I actually think that most people should not use Gentoo. I don't prevent anyone from using Flatpak or pulling half the Internet with docker containers for everything. But if they come telling me that my way is crap, I will defend it :-).

> I am somewhat at a loss.

I guess I was not trying to say "C/C++ is great, there is nothing to change". I just think it's not all crap, and I see where it all comes from and why we can't just throw everything away. There are many things to criticise, but many times I feel like criticisms are uninformed and just ride on the fact that everybody makes them. Everybody spits on CMake, so it's easy to do it as well. But more often than not, if I start talking to someone who says they cannot imagine how anyone could design something as bad as CMake, it turns out they themselves write terrible CMakeLists. Those who can actually use CMake are generally a lot more nuanced.


Even though I understand why you prefer that, I feel like you're painting too rosy of an image. To quote Tom Delalande: "There are some projects where if it was 10% harder to write the code, the project would fail." I believe this deeply and that this is also true for the build system: your build config should not be rivalling your source code in terms of length. That's hyperbole in most cases, sure, and may well indicate badly written build configs, but writing build configs should not be a skill issue. I am willing to bet that Rust has risen so much in popularity not just because of its memory safety, but also because of its build system. I don't like CMake, but I also don't envy its position.


> but writing build configs should not be a skill issue

I think it shouldn't be a skill issue because a true professional should learn how to do it :-).

My build configs are systematically shorter than the bad ones.

Also I feel like many people really try to have CMake do everything, and as soon as you add custom functions in CMake, IMO you're doing it wrong. I have seen this pattern many times where people wrap CMake behind a Makefile, presumably because they hate having to run two commands (configure/build) instead of one (make). And then instead of having to deal with a terrible CMakeLists, they have to deal with a terrible CMakeLists and a terrible Makefile.

It's okay for the build instructions to say: "first you build the dependencies (or use a package manager for that), second you run this command to generate the protobuf files, and third you build the project". IMO if a developer cannot run 3 commands instead of one, they have to reflect on their own skills instead of blaming the tools :-).
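Those three steps might look like this in practice (the script name, proto paths, and options here are hypothetical):

```shell
# 1. Build/install the dependencies into a local prefix
#    (or skip this and use your package manager of choice).
./scripts/build-deps.sh

# 2. Generate the protobuf sources.
protoc --cpp_out=src/generated proto/*.proto

# 3. Configure and build the project itself.
cmake -DCMAKE_PREFIX_PATH="$PWD/deps/install" -Bbuild -S.
cmake --build build
```

Each step stays visible and replaceable, instead of being buried inside a CMakeLists that tries to do everything.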


> I think it shouldn't be a skill issue because a true professional should learn how to do it :-)

Therein lies the issue, in my opinion: I do not believe that someone should have to be a "true professional" to be able to use a language or its tooling. This is just "git gud" mentality, which as we all [should] know [by now] cannot be relied upon. It's like that "So you're telling me I have to get experience before I get experience?" meme about entry-level jobs: if you need to "git gud" before you can use C/C++ and its tooling properly, all that means is that people will be writing appalling code and build configs in the meantime. That's bad. Take something like AzerothCore: I'd wager that most of its mods were made by enthusiasts and amateurs. I think that's fine, or at least should be, but I'm keenly aware that C/C++ and its tooling do not cater to, nor even really accommodate, amateurs (jokey eg: https://www.youtube.com/watch?v=oTEiQx88B2U). That's bad. Obviously, this is heading into the realm of "what software are you trusting unwisely", but with languages like Rust, the trust issue doesn't often include incompetence, more so just malice: I do not tend to fear that some Rust program has RCE-causing memory issues because someone strlen'd something they shouldn't have.


> It's like that "So you're telling me I have to get experience before I get experience?"

Not at all. I'm not saying that one should be an architect on day one. I'm saying that one should learn the basics on day one.

Learning how to install a package on a system and understanding that it means that a few files were copied in a few folders is basic. Anyone who cannot understand that does not deserve to be called a "software engineer". It has nothing to do with experience.


> I'm saying that one should learn the basics on day one.

Except that C/C++ have an entirely incongruous set of basics compared to modern languages, which people coming to C/C++ for the first time are likely to have a passing familiarity with (unless it's their first language, of course). Yes, cmake configs can be pretty concise when only dealing with system packages, but this assumes that developers will want to do that, rather than replicate the project-localness ideal, which complicates cmake configs. We're approaching this from entirely different places, and it reminds me of the diametrically-opposed comments on this post (https://news.ycombinator.com/item?id=45328247) about READMEs.

