
Microsoft asking people to upgrade their hardware reminds me of an old joke (retold from memory, so excuse the bad storytelling):

User: hello, my PC is smoking and I would like to purchase some anti-smoke software

Computer service: sorry, that's not possible, you have to replace the hardware

User: no, I really want anti-smoke software

(Later)

User: hello I would like to purchase a new computer

Service: see, I told you anti-smoke software is not possible

User: wrong! I purchased one from Microsoft. But apparently it's not compatible with my current hardware


How many lines of code are there in the biggest codebase in the world?

There are two things in the article: a kind of make alternative to "save your command history" (basically avoiding retyping long commands), and how they use TS to write shell scripts.

In the web/JS/TS ecosystem, most people use npm scripts in package.json rather than a custom make.ts. Scripts you launch from there can be in any language, so nothing prevents you from using TS shell scripts if that's your thing.
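
For example, a minimal scripts section might look like this (the script names and commands are made up for illustration):

    {
      "scripts": {
        "build": "tsc -p .",
        "test": "vitest run",
        "db:up": "deno run --allow-run scripts/dbup.ts"
      }
    }

Each entry then runs with `npm run <name>` (or the pnpm/yarn equivalent).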

Another quite standard way of saving your command history in a file, which I have seen used in all ecosystems, is called "make". It even saves you a few characters when you have to type it, people don't have to discover your custom system, autocomplete works out of the box, etc.
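
Something like this, as a sketch (the targets are hypothetical; .PHONY marks them as commands rather than files, and recipe lines must be indented with tabs):

    .PHONY: build test

    # build the project (placeholder command)
    build:
    	npm run build

    # run the tests, building first
    test: build
    	npm test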


The main downside to putting scripts into package.json (or NX's project.json) is that you have to wrap them in JSON. That's fine for simple commands, but when you start adding things like quotes or multi-command pipelines it gets a bit busy.
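
A made-up example of the kind of escaping that creeps in:

    {
      "scripts": {
        "check": "node -e \"console.log('checking')\" && eslint . --max-warnings 0"
      }
    }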

I quite like make or just as a task runner, since the syntax/indentation/etc. overhead is a lot lower. I haven't yet tried to introduce either into any JS-based projects, though, because it adds yet another tool.


I put any sufficiently complex command into scripts/<command>.sh and keep the package.json as light as possible.

One very big upside of package.json for me is that we use pnpm, which has a very sophisticated way of targeting packages with --filter (like "run tests for the packages that changed compared to master, plus all the packages that transitively depend on them", which is often exactly what you want to do).
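
If I remember the filter syntax right, that example looks something like this (the task name is a placeholder):

    # test every package changed relative to origin/master, plus all their dependents
    pnpm --filter "...[origin/master]" test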


A pet peeve of mine is JS monorepo tools that only run package.json scripts.

Like yeah, it's totally reasonable that they go that route, but please just let me pass a command that can be executed without having to wrap it in a package.json script.


I don't know about the others, but pnpm has `pnpm exec`, which allows running arbitrary commands in some or all of your packages.
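
For instance (the command itself is an arbitrary example):

    # run a command in every workspace package
    pnpm -r exec rm -rf dist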

Deno has a similar tool to npm scripts called "tasks" in deno.json. It even has a nice mini-advantage in that it encourages including a one-line description, which shows up in the `deno task` list of all configured tasks and in various IDE integrations.
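
If I remember the object form correctly, it looks roughly like this (the task name and command are invented):

    {
      "tasks": {
        "dev": {
          "description": "Run the dev server with file watching",
          "command": "deno run --watch main.ts"
        }
      }
    }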

Most Deno tasks, though, more so than a lot of npm scripts in my experience, tend to just be `deno run …` commands (the shebang line in the article) pointing at a script in a directory like `_scripts/`, rather than commands written out inline.


I've started naming my scripts directory "run/" instead of "_scripts/" because it's easier to type, trading away the discoverability of _scripts sitting at the top of my editor's file sidebar. Also, VS Code will now look at the shebang for ts-node or deno and load the file as TS without a file extension (yay), so I can drop the .ts now.

So I generally just reference ./run/dbup, etc., where dbup starts the db via docker-compose.dev.yaml, waits for the db to be ready, then runs the grate task via compose as well and checks/waits for that to succeed or fail. I usually have the other dependencies (redis, mailhog, etc.) load after the db is ready.
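
A minimal sketch of what such a run/dbup can look like (the compose details, port and wait loop here are illustrative, not the actual script):

    #!/usr/bin/env -S deno run --allow-run --allow-net
    // Start the db container, then poll until it accepts connections.

    const up = new Deno.Command("docker", {
      args: ["compose", "-f", "docker-compose.dev.yaml", "up", "-d", "db"],
    });
    const { code } = await up.output();
    if (code !== 0) Deno.exit(code);

    // Assumes Postgres on localhost:5432; adjust for your setup.
    for (let i = 0; i < 30; i++) {
      try {
        const conn = await Deno.connect({ hostname: "localhost", port: 5432 });
        conn.close();
        console.log("db is ready");
        Deno.exit(0);
      } catch {
        await new Promise((r) => setTimeout(r, 1000));
      }
    }
    console.error("db did not become ready in time");
    Deno.exit(1);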

I've definitely been favoring Deno for a while now... it's really easy to use with a shebang and direct module references, compared to many/most other options that require separate install steps. My ~/bin/ is also full of them.


My monorepos have become increasingly multilingual over the years, often due to dependencies, and it's not uncommon to find a Makefile, Cargo.toml, package.json, deno.json, venv + requirements.txt, etc. all living in the same root.

Coming from a web background, my usual move is to put all scripts in the package.json, if present. I'd use make for everything, but it's overkill for a lot of stuff and is non-standard in a lot of the domains I work in.


> My monorepos have become increasingly multilingual over the years, often due to dependencies, and it's not uncommon to find a Makefile, Cargo.toml, package.json, deno.json, venv + requirements.txt, etc. all living in the same root.

Same!

My usual move used to be putting everything in a Makefile, but after getting traumatized time and time again by ever-growing complexity, I've started to embrace Just (https://github.com/casey/just), which is basically just a simpler Make. I tend to work across teams a lot, and a Makefile/justfile seems easier for people to spot at a glance than scripts inside a package.json, which mostly frontend/JavaScript/TypeScript people know to look at.
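
A small justfile sketch (the recipes are made up):

    # run the test suite
    test:
        npm test

    # deploy to the given environment, running tests first
    deploy env: test
        ./scripts/deploy.sh {{env}}

`just --list` then shows the recipes with their doc comments, and `just deploy staging` runs the tests before deploying.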

But in the end I think it matters less what specifically you use, as long as you have one entry point that collects everything. It could be a Makefile, justfile or package.json, as long as everything ends up under the same thing. Could be a .sh for all I care :)


Mise is also very nice (for dependencies and for scripts) https://mise.jdx.dev/

I've just started to assume I'm in an environment where shebangs work, and I put my scripts for repeated things under ./run/*: generally bash if it's simple, TS/Deno if it's more complex. Deno has been a joy for shell scripting.

Yeah, I don't go out of my way to accommodate Windows developers. I wouldn't go out of my way to hire them, either. Modern Windows is a corporate surveillance platform.

Deno is great, too. I use Bun where I can but Deno really removes a ton of friction.


Even on Windows. I happen to be working in a locked-down environment without the benefit of even WSL or Docker (I'm actively pushing for them), and even then, the git tooling you install includes bash (and other MSYS builds of *nix tools). I've also got a bit in my ~/bin directory for shared usage as well, where available on Windows.

So the same stuff still works even there. Even if I were still using C#, I'd rather not be working on Windows at this point; it's just so entrenched in a lot of work/business/government environments in the Phoenix area.

Aside: I haven't really used Bun at all; I've been exceedingly happy with Deno from pretty early on. The only thing I sorely miss is an MS SQL adapter that works with it. Then again, MS SQL is not my favorite by a long shot at this point.


Make is a very good choice for storing common maintenance commands for a project. We use it at work for this. It started when we migrated to Docker more than a decade ago: before docker-compose was a thing, building and running a set of containers required quite a bit of shell scripting, and we decided to use Make for that. Make is ubiquitous and cross-platform, the targets are essentially snippets of shell with some additional features/syntax added on top, there's a dependency system (you can naturally express things like "if you want to run X, you need to build Z and Y first, then X, then you can run it"), it allows for easy parameterization (`make <target> ARG=val`), plus it's actually a Turing-complete language with first-class lambdas and the capacity for self-modifying code[1]. And when some rule becomes too complex, it's trivial to dump it into `scripts/something.sh` and have Make call it. Rewriting the script in another language also works, and Make still provides the dependencies between targets.
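
A sketch of the dependency and parameterization bits (the targets and variable are invented):

    ENV ?= dev                # override with: make run ENV=prod

    .PHONY: build run
    build:
    	docker compose build

    run: build                # running requires building first
    	docker compose --env-file .env.$(ENV) up -d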

TL;DR: Make is a very nice tool for gathering the "auxiliary" scripts needed for a project in a language-agnostic manner. It's better than setup.py and package.json precisely because it provides a single interface for projects of both kinds.

[1] Which is worth knowing so you can avoid both features like the plague.


As an alternative solution to the sibling comment's: I run everything rootless under systemd --user, so my services don't have access to privileged ports, and I use firewall rules to redirect the external interface's low ports to the local high ports. (That sounds annoying, but in practice I only redirect a single port, 443, to traefik, and then use it to route to the right container service depending on the domain.)
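
With iptables, that redirect is a one-liner (the interface name and the high port are assumptions):

    # redirect incoming 443 on the external interface to traefik listening on 8443
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 8443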

Will this do remote attestation? Which hardware platforms will it support (Intel SGX, AMD SEV, AWS Nitro)?

systemd has recently added experimental support for musl libc, which should eventually allow Alpine to adopt it, though.

If they want to. Alpine is minimal. systemd is anything but. It's like the GNOME of inits.

Quadlets are great, but running podman via systemd as a non-root user worked perfectly well before quadlets, and I have no idea what your parent is talking about. (I'm currently in the process of converting my home services from rootless podman under systemd to quadlets.)

Fair, it worked, but podman generate systemd is deprecated now. I found the generated unit files pretty brittle to maintain compared to just having a declarative config that handles the lifecycle.

I agree 100%. I was stuck without quadlets on the previous Debian stable, so I had to work with podman generate systemd, but quadlets are undoubtedly better. I was looking forward to upgrading Debian just for that, and now that I have, I'm really happy to migrate. Custom container image management in particular is so much smoother.
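
For reference, a minimal quadlet sketch, dropped in ~/.config/containers/systemd/myapp.container (the name and image are hypothetical):

    [Unit]
    Description=My app container

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

After a `systemctl --user daemon-reload`, it's managed like any other service: `systemctl --user start myapp`.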

> Social media is objectively damaging to the interests of the ruling class,

Maybe double-check who owns and controls most social media platforms, and then think a bit about whether you'd categorize them as ruling class or working class.


Why do you say podman is a poor replacement? It has consistently been the better option for me on Linux, with easy rootless mode, daemonless operation, quadlets, etc. And at work, where I have to use macOS, it works just as well.

Yeah, people are sleeping on Podman, which is genuinely leading the space now that docker-engine is all but in maintenance mode.

Quadlets are amazing and greatly simplify the deployment and management of containers.

The systemd integration is so good because you get this battle-tested process manager with a gazillion features, and you can use them with your containers for free.

Podman can run pods (hence the name), an abstraction that k8s has proven useful but that docker completely lacks.

Podman pushing k8s manifests as an (imho better) alternative to compose, via podman play, is refreshing. And it can be dropped in with quadlets too.

Podman can generate your k8s manifests from your running containers. Get everything running how you like and save.

buildah frees you from Dockerfiles and lets you build containers completely rootlessly.
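
The manifest round trip looks something like this (the container and file names are invented):

    # snapshot a running container as a k8s manifest
    podman kube generate mycontainer > app.yaml

    # ...and recreate it from that manifest
    podman kube play app.yaml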


The interfaces, both the CLI and Podman Desktop, are still not at parity with Docker's. Podman contributors will be the first to tell you this.

That's not to say they aren't effective, or even good, at least in the CLI's case. They're just still catching up. It's not, and shouldn't be, a surprise considering Docker's head start.

