I'm working with SBOMs; one fun side effect is that you can scan SBOMs for vulnerabilities. Suddenly hackers, your customers, and your competitors start to do this, and you need to make sure your third-party dependencies are updated.
This reveals the cost of dependencies (a cost that is often ignored).
I hope that in the future we will have a more nuanced discussion about when it's okay to add a dependency and when you should write from scratch.
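As a sketch, that kind of scan can be done with syft + grype (assuming you use those tools; any SBOM generator and scanner pair works the same way):

    # generate a CycloneDX SBOM for the current directory (requires syft)
    syft dir:. -o cyclonedx-json > sbom.json
    # scan the SBOM for known vulnerabilities (requires grype)
    grype sbom:./sbom.json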
I also switch between a lot of computers (work computer at home / work computer at work) but have to develop on the "big powerful machine at work". My current solution is tmux + nvim and it works really well. I can just pick up a session from whatever computer I'm in front of at the moment.
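The core of that workflow is just an attach-or-create tmux session over SSH; a minimal sketch (host and session names are examples):

    # attach to the "main" session if it exists, otherwise create it
    ssh big-workstation -t 'tmux new-session -A -s main'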
Am I correct that neither Zed nor VS Code supports this use case yet?
I use VSCode + SSH remote for this and it works great. The only nitpick I have is needing to manually reconnect when I suspend my laptop and the SSH connection breaks. It's a separate session though, which doesn't matter to me but may be a deal breaker for you.
I use Tailscale for a personal VPN, so the beefy workstation is always securely available from my laptop, even from across the pond.
There's no input delay in VSCode (editor, UI) because the UI is local. Delay in saving/reading/searching files is not noticeable for me.
(Edit to explain: VSCode is still running locally, but it also installs a server-side (headless) component on the remote machine. That way editing files is local/fast, but stuff like running the code, search/replace, etc. also works fast because it's handled by the server side.)
The terminal (including the VSCode terminal) feels slightly sluggish; it's noticeable if the server is in another country and uncomfortable if it's across the pond.
The input delays are very dependent on where the server is and what it is doing. If the server is idle and close by (ping-wise), the delay is virtually indistinguishable from local VSCode. If I'm connecting to the server via a VPN in a different country while stressing all the cores with some background compiling or number-crunching work, input delay gets quite noticeable.
I multiplex my SSH connections, so the workflow is just: ssh again, then reload the VSCode window. If Mosh could multiplex these (and paper over the connection problems) that'd be great, but after a cursory look it doesn't seem possible.
It's a minor thing tho.
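For anyone curious, the SSH multiplexing bit is just a few lines of ~/.ssh/config (the host alias here is made up):

    Host workstation
        # share one TCP connection across all ssh sessions to this host
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        # keep the master connection alive after the last session exits
        ControlPersist 10m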
In general, I quite like Mosh! If I routinely had to work on faraway servers I'd use mosh just for its smart local echo.
Not persistent sessions, but VS Code can run the GUI locally and connect to a remote server. When you reconnect, it reopens all your tabs, workspace settings, etc.
I strongly disagree. You should always keep the code as simple as possible and only add abstractions once you really need them.
Too many times I've found huge applications that turn out to be mostly scaffolding and fancy abstractions without any business logic.
My biggest achievements have been deleting code:
1. I've successfully removed 97% of all code while adding new features.
2. I replaced 500 lines with 17 lines (suddenly you could fit it on a screen and understand what it did).
Personally, I don't see the difference between this and submodules. git-repo stores the information in XML files, vdm stores it in YAML files, and git submodules store it in the git database. I don't really care.
The real headache for me is the tension between traceability and ease of use. You need to pin your dependencies to a sha1 to have traceable, SLSA-compliant builds, but that also means you'll need to update all super-repos once a submodule is updated. Gerrit has support for this, but it's not atomic, and what about CI? What about CI that fails?
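For illustration, sha1 pinning in a git-repo manifest looks roughly like this (project names and URL are made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <manifest>
      <remote name="origin" fetch="https://git.example.com/"/>
      <default remote="origin" revision="main"/>
      <!-- pinned to an exact sha1 for traceable builds -->
      <project name="libfoo" path="third_party/libfoo"
               revision="0123456789abcdef0123456789abcdef01234567"/>
    </manifest>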
I care about the aesthetics and the convenience that the tool provides. git-repo at least has a simple command to get all the latest stuff (repo sync). Git submodules are a mess in this regard. Just look at this Stack Overflow thread:
People are confused about how to run THE most basic command that you'd have to run every single day in a multi-repo environment. There's debate in the comments about which flags you should actually use. No thanks.
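For reference, the commands being debated are variations of these two (both are real git invocations; which one you want depends on your setup):

    # fetch and check out the commits the superproject currently pins
    git submodule update --init --recursive

    # or pull the superproject and its submodules in one go
    git pull --recurse-submodules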
There's a lot of room for improvement in this space. git-repo isn't widely used outside of AOSP, and lots of organizations are struggling to find proper tooling for this type of setup.
Also, the discussions are there because it's been more than a decade and the options have evolved over time.
Submodules are a bit clunky, but the problem they solve is itself clunky. Bringing in another tool doesn't really feel like it's going to reduce the burden.
I have yet to be in a situation where I blindly want to update all submodules. It is a conscious action: X has updated and I want to bring those changes in.
cd submodule, update, test, commit.
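Spelled out, that update looks something like this (paths and test command are placeholders):

    cd third_party/libfoo        # enter the submodule
    git fetch origin
    git checkout origin/main     # move to the version you want
    cd ../..
    make test                    # run whatever your test suite is
    git add third_party/libfoo   # stage the new submodule pointer
    git commit -m "Bump libfoo"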
I haven't seen anything in this thread that really motivates me to learn another bespoke tool just for this. I'm sure it varies for different projects though.
Fast-forward 15 years and see how the tooling in this thread has evolved and how many different tools people will have used, then compare that to the Stack Overflow post. I'm more inclined to invest time in git itself.
This is fine until you're working with hundreds of other developers. I believe the reason solutions like this exist is to abstract git away from most devs, because (in my experience) many enterprise devs have only rudimentary git knowledge.
Sure, the devs should "just learn git" - but the same argument applies to a lot of other tech nowadays. Ultimately most folks seem to want to close their ticket off and move to the next one.
Git submodules and git subtrees generally do not fit my org's needs - we have internal tooling similar to this. Happy to expand on that if you have questions.
The risk with that approach is that each of the hundreds of other developers will bring their own tool for X. So now you have hundreds of tools and everyone only knows a subset.
If there is a common operation that people get wrong, or don't use often enough but still need to run regularly, a five-line bash script will not only do the job, it will actively help them learn the tool they are using.
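As a hypothetical example, the kind of five-line script I mean (the script name and final message are just illustrative):

    #!/bin/sh
    # sync.sh -- the one routine everyone runs to get up to date
    set -e
    git pull --recurse-submodules
    git submodule update --init --recursive
    echo "workspace in sync"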
The beauty of a static site is that if you have a copy of the generated output, using the site generator becomes optional. You can always move away from it (though not back).
This means that a "custom buggy site generator" is never a bad dependency to have.
Git actually uses that approach, which means libgit is pretty useless to embed. No one embeds it anyway, since it's GPL and everyone uses libgit2 instead.
Sourcehut is great and inspired a lot of my design choices. Although unfinished, there is a plugin (ayllu-mail) in place to support email-based workflows. Ayllu is meant to be lightweight, hackable, and oriented towards individuals or small software communities (think Github/Forgejo organizations). Although you can self-host it, Sourcehut is very large and oriented towards supporting thousands of users. My original idea was actually to fork Sourcehut and create something called Minihut, but it turned out to be not very fun and I ended up with this project.