> huge uid didn't work in podman (like 1000000 I think)
You're running into the `/etc/sub[ug]id` defaults. The old default gave each user 65,536 sub-IDs, starting at 100000 for the first normal user, but that changed recently when people at megacorps with hundreds of thousands of employees defined in LDAP and the like ran into ID conflicts. Sub-IDs now start at 2^19 on RHEL 10 for this reason.
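A container UID like 1000000 needs a sub-ID range at least that wide, and the old default range is only 65,536 entries. Inspecting or widening the ranges is just a text-file edit plus one Podman command; a minimal sketch, where the username and numbers are purely illustrative:

```
# see which ranges your user currently has
grep "$USER" /etc/subuid /etc/subgid

# typical /etc/subuid and /etc/subgid entries (illustrative values):
#   alice:100000:65536     old-style default: start at 100000, 65536 IDs
#   alice:524288:65536     newer default: start at 2^19 = 524288

# after editing the ranges, tell Podman to pick up the change
podman system migrate
```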
Broadly, the claim that Podman is a drop-in replacement for Docker is true only for the simple cases, but people have developed assorted dependencies on Docker implementation details. Examples:
1. People hear about how great rootless is with Podman but then expect to be able to switch directly from rootful Docker to rootless Podman without changing anything. The only way that could work is if there were no difference between rootful and rootless to begin with, but people don't want to hear that. They combine the two selling points in their heads and think they can get a drop-in replacement for Docker *and* rootless by default. The proper choice is to either switch from rootful Docker to rootful Podman *or* put in the work to make your containers run rootless, work you would also have had to do with rootless Docker.
2. Docker Compose started out as an external third-party add-on (v1) and was later rewritten as an internal facility (v2), but `podman compose` calls out either to `docker-compose` (i.e. v1) or to its own clone of the same mechanism, `podman-compose`. The upshot is a lot of impedance mismatch. Combine that with the fact that Podman wants you to use Quadlets anyway, and there's little incentive to work on these corner cases.
3. Docker has always tried to pretend SELinux doesn't exist, either by hosting on Debian and friends or by banging things into place by using their privileged (rootful) position. Podman comes from Red Hat, and until recently they had Mr SELinux on the team. Thus Podman is SELinux-first, all of which combines to confuse transplants who think they can go on ignoring SELinux.
4. On macOS and Windows, both Podman and Docker need a background Linux VM to provide the kernel, without which they cannot do LXC-type things. These VMs are not set up precisely the same way, which produces migration issues when someone is depending on exact details of the underlying VM. One common case is that they differ in how they handle file sharing with the host.
For #2, `podman compose` will also use Docker Compose v2 if available. Compose v2 is still a standalone binary even when it's installed as part of Docker (it ships as a CLI plugin). As long as you download the v2 binary and put it in the right location, the `podman compose` subcommand will invoke it.
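Roughly like this, assuming a Podman new enough to have the `compose` subcommand and that `~/.local/bin` is on your `$PATH`; the URL is the upstream release asset for x86-64 Linux, so adjust for your platform:

```
# grab a Compose v2 release binary and drop it somewhere on $PATH
curl -L -o ~/.local/bin/docker-compose \
  https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64
chmod +x ~/.local/bin/docker-compose

# podman compose hands the rest of the command line to the provider it finds
podman compose version
```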
> github.com…I don't want an 'impedance mismatch' between my system and their system
So give your contributors developer accounts on your Fossil instance, which is super-cheap to set up, being a single binary with nearly zero external dependencies. (Those being OpenSSL and zlib, which are table stakes these days.) My containerized build is a single static binary that compresses to ~3.5 MB, total, all-in.
If you're concerned over the lost promise of easy PRs from randos on the Internet, I question your premise. My experience is that below a certain project popularity level, there is less than one total full-time developer on the project, even counting all possible external committers. Below this threshold, why optimize for external contributors? If someone has a sufficiently valuable patch, they can deal with getting a Fossil repo login or sending a patch.
I've been the maintainer of a piece of software that's in all the popular package repos for coming up on two decades, and I have _never_ gotten a worthwhile PR for it via GitHub. I spend more time using their code-commenting features to explain why this, this, and this make the change unacceptable, after which the PR submitter goes away rather than fix their patch. It's a total waste of time.
I did once upon a time get high-quality external contributions, but that was back when the project was hosted in Subversion, and it didn't matter that posting patches required more work than firing off a GH PR. People who have something of sufficient value to contribute will put up with a certain level of ceremony to get their code into the upstream project.
(To be fair, I expect the reason for the lack of quality external contributions is that the project is in some sense "done" now, needing only the occasional fix to track platform changes.)
If you are lucky enough to have an audience of outsiders who will provide quality contributions, Fossil has a better option for patches than unified diffs: see its "patch" and "bundle" commands. These let your outsiders send a full branch of commits, complete with comments, file renames/deletions, etc.
…kind of like a PR. :)
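The rough shape of it, assuming a reasonably recent Fossil; check `fossil help patch` and `fossil help bundle` for the exact options:

```
# contributor: capture uncommitted local changes as a single patch file
fossil patch create mychange.fpatch
# maintainer: apply it to a local checkout for review
fossil patch apply mychange.fpatch

# or ship a whole committed branch: comments, renames, deletions and all
fossil bundle export feature.bundle --branch my-feature
fossil bundle import feature.bundle
```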
If you absolutely require integration with Git-based tooling, Fossil makes it easy to mirror your repo to GitHub, which you can treat as read-only.
Have hash collisions in Git ever been a problem for you?
What is the exact scenario in which a hash collision would be dangerous? (Like, you give some random person push access to your repository and... I'm really getting lost here... they override some commit with a different commit with the same hash? And that's somehow worse than them just creating a new commit with a different hash, which you would notice for sure? And the only reason you won't notice their Evil Change is because they sneaked it in inside a hash collision?)
We don't know how to push bad artifacts into a Merkle tree by exploiting SHA-1's weaknesses. The thing is, though, we didn't want to be pushed into scrambling for a better hash algorithm after some clever bastard works that trick out. :)
It appears that Git currently has experimental, non-backward-compatible support for SHA-256, so I'd guess "as soon as they finish fixing any issues and figure out a nice upgrade path", with the caveat that there's little pressure, because it's not actually a practical problem yet and isn't expected to be one in the foreseeable future.
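If you want to poke at it today, it's a repo-creation option in reasonably recent Git releases; note that a SHA-256 repo can't interoperate with SHA-1 repos:

```
# create a repository using the experimental SHA-256 object format
git init --object-format=sha256 sha256-test
cd sha256-test
git rev-parse --show-object-format   # prints "sha256"
```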
When Fossil didn't have those features, the frequent complaints were that Fossil doesn't have those features, and that's why they had to keep using $INERTIA_SOURCE. Now that it has those features, it's the reason not to move to it? I see. :)
All of these features cooperate and serve the same goal: coordinate the work product of people on a project, in a distributed fashion. One path to that is the nearly fully centralized model of GitHub. Another is the VCS + mailing list + bug tracker + wiki path, which requires considerable admin resources to manage, and at the end of the day is a pile of barely-cooperating services. Fossil's path is to put them all into one place so they all work properly together.
You can reference ticket IDs from a forum post.
You can point to a section of the timeline from a wiki article.
You can create diagrams in Pikchr format that live as version-controlled text in the repo and reference them from commit messages.
You can generate HTML diffs and include them in the body of a Markdown chat posting to discuss a proposed change before committing it.
Etc., etc. It's all communication, which you need when you have multiple people working on a project, especially across time zones.
I can't say anything to your first remark other than that different people want different things. It's unfair to imply the GP is being hypocritical unless they personally complained in the past that they couldn't use Fossil because it lacked X and are now complaining because it has X. Myself, I've never had a strong opinion either way; I only ever knew Fossil as having all these extra features and found that fascinating, not necessarily good or bad.
I did eventually try Fossil somewhat seriously for a personal project last year. I gave up and moved back to git, and I don't think I'll try Fossil again, either personally or for a broader group or company project. I still find the idea of full integration interesting, just not Fossil's execution of it, not to mention some disagreements with the SCM philosophy itself. (For some it's rebase; for me, I realized I rather like the concept of staged/unstaged files, or even Perforce-style pending changelists.) Meanwhile, the collection-of-services approach actually works and cross-integrates pretty well, especially when you don't force yourself to self-host everything and take on that administrative overhead. And you get non-ghetto (for lack of a better descriptor) versions of those services.
Still, I don't recommend against Fossil; it's clearly good enough and aligns very well with certain values. People should evaluate it for themselves.
> Am I the only one who's actively suspicious about this kind of thing? With Git, I can use whatever wiki, ticketing, documentation, and blog features I want. I don't want my VCS to be a Lotus Notes for software development.
The forum, chat, and Pikchr features have been added since then. Indeed, everything in the Fossil ChangeLog has been added since then, because it cuts off a few months after your post: https://fossil-scm.org/home/doc/trunk/www/changes.wiki
My criticism, over a decade ago, was that Fossil had too many features that were tangential to version control. Since then, they have added more of those tangential features. So please explain to me how my complaints in 2011 are in any way inconsistent with the state of the project today. If anything, my complaints about feature creep have been vindicated by over a decade of continued feature creep.
In 2011, I wanted the flexibility to use other wiki, ticketing, documentation, and blogging systems. In 2022, I also want the ability to use other forum and chat systems. In 2033, I will probably want the ability to choose my own text editor. I have never complained that Fossil is lacking in features, especially features that are not directly part of version control.
Fossil is ideologically opposed to rebasing, which makes it (according to its own docs, IIRC) not so good for large teams, since the repo fills up with tons of useless little commits that git users would squash out before merging.
I'm the type of guy that would store the Pikchrs in the version control system alongside the HTML, then write a Makefile to regenerate the SVGs referenced from the HTML every time the Pikchrs change.
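Something like this hypothetical Makefile fragment, assuming the standalone pikchr translator from pikchr.org is on your $PATH; the `--svg-only` flag is my guess at the switch that emits bare SVG, so check `pikchr --help` before trusting it:

```
# rebuild each .svg whenever its .pikchr source changes
SVGS := $(patsubst %.pikchr,%.svg,$(wildcard diagrams/*.pikchr))

all: $(SVGS)

%.svg: %.pikchr
	pikchr --svg-only $< > $@   # flag assumed; adjust to your pikchr build
```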
But at that point, you might as well just use Fossil as a CMS. :)
No technology popular enough to attract an audience ever goes away. However, would you care to speculate on how many troff documents were written in 2021 versus Markdown docs?
If you leave off the "pikchr" tag or use something else like "PikchrSource", then the renderer doesn't trigger.
If you want to see it both ways, Fossil renders the diagrams with an Alt or Ctrl-click handler attached (depending on platform) that toggles between the SVG and the fixed-width source code view.
And if you don't like the modifier key, you can tag a diagram "pikchr toggle" to make it toggle with a simple left-click.
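To make the tagging concrete, here's what such a fenced block looks like in Fossil's Markdown; the diagram is a trivial placeholder, and it's the "pikchr toggle" info string on the fence that gives you the left-click behavior:

```pikchr toggle
# a trivial diagram; in Fossil, left-clicking the rendered SVG flips back
# to this fixed-width source view because of the "toggle" modifier
box "Pikchr source"; arrow; box "SVG"
```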
I daresay there is no DVCS appropriate for lots of large files that change. You want something more like a classic non-distributed VCS, so you're not cloning the entire history of huge files.
Worst is when the files are already data-compressed, because delta compression can't do much with them: you're not only pulling every historical version, you're pulling nearly every byte of each historical version.
I wrote an article for the Fossil docs covering this, but the core experiment can be run against any VCS.
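A rough sketch of the kind of experiment I mean; the file size and repetition count are arbitrary, and random data stands in for already-compressed content:

```
# commit several versions of a large incompressible file, then compare the
# size of the full history against the size of a single checkout
git init bigfile-test && cd bigfile-test
for i in 1 2 3 4 5; do
    dd if=/dev/urandom of=asset.bin bs=1M count=50
    git add asset.bin && git commit -q -m "asset rev $i"
done
du -sh .git asset.bin   # history is roughly 5x the checkout; deltas can't help
```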