I think the question asked was closer to "why aren't you using an artifact repository?"
Nexus is pretty good, but if the language you are using isn't integrated well with gradle/maven you can always just use a shared drive fed by jenkins builds.
> Nexus is pretty good, but if the language you are using isn't integrated well with gradle/maven you can always just use a shared drive fed by jenkins builds.
Here's where I start to have problems with contemporary development culture. You mentioned using Nexus, Gradle, Maven and Jenkins where the guy just wants to keep some binaries along with the source code they're generated from.
To bring it back to the OP, this argument is in fact represented in the OP.
OP is arguing that these (having to use Nexus, Gradle, Maven, and Jenkins just to keep some binaries along with the source code they're generated from) are workarounds for limitations in git that ideally would not be there, that don't necessarily have to be there, and that aren't there in all VCSs. The OP also notes that git fans instead want to claim "No, that's just the way git SHOULD work; you SHOULD need to use an 'artifact repo' in addition to git to keep a few binaries with your source code."
That said--and this is without knowing the exact build and tooling environment, so I may well be giving advice inappropriate to the situation at hand!--the second part of that "keep some binaries along with the source files they're generated from" is kind of an antipattern.
If it takes too long to generate them from source every time, they need to fix that issue, not least because slow builds mean slow testing, and slow testing means no testing.
That's why I spent two days earlier this year moving a 30-minute build down to a 2.5 minute build.
The output of the build is basically a tool in itself. So most people don't need the build process, just the resulting tool. The input changes on pretty much a monthly basis and is not easily versioned. I could set up all dev machines to support the build and everybody could build it themselves from sources, but that would require me to
* install all the required instruments for the ritual rain dance
* teach the whole team how to do the ritual rain dance
* support the people that break an arm or a leg doing the ritual rain dance
So I prefer to perform the dance on my machine, collect the tool, and point the main dev environment to the right location. It's all scripted, so I kick off the job and grab a coffee.

However, I want to keep old instances around so I can track when bugs crept in. That means I can't just overwrite the result, so I need to adjust the pointer every time. If I could just check the tool in with the regular dev setup that would be much easier, but, since we're using git, that would blow up quite quickly and overwhelm my disk space. (And folks would kill me for filling up their disks as well, rightfully so.) That's something SVN or another centralized VCS would handle much more gracefully.

In short, I have a use-case for fairly stupid versioned file storage with a push/pull API. No complicated merging, no branching, nothing. git-annex could do it, but is overkill.
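The scripted dance roughly amounts to something like this (a hedged sketch: the shared-drive path, the pointer-file name, and the dummy build step are all placeholders standing in for the real setup):

```shell
#!/bin/sh
# Rough sketch of the publish step. SHARE defaults to a temp dir here
# so the sketch runs anywhere; the real thing would point at the shared
# drive. The build command is a stand-in, too.
set -eu

SHARE="${SHARE:-$(mktemp -d)}"
STAMP="$(date +%Y%m%d-%H%M%S)"
DEST="$SHARE/build-$STAMP"
mkdir -p "$DEST"

# ... the actual rain dance would run here; a dummy artifact stands in:
echo "built at $STAMP" > "$DEST/tool"

# Repoint the dev environment by rewriting the pointer file it reads.
printf '%s\n' "$DEST" > "$SHARE/current"
```

Keeping every build-&lt;datetime&gt; directory around instead of overwriting is what preserves the ability to track down when a bug crept in.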
There are some better solutions to what we're currently doing, but there's so many yaks to shave and so few razors.
> That's something SVN or another centralized VCS would handle much more gracefully.
Subversion handles this gracefully because you don't download all of the repository's history to your machine. What you're describing here is a trade-off. Most people are OK with losing the ability to check in 500MB files in order to gain the decentralization of having a full copy of the repo (and not needing to query the server just to view history).
Thanks for such a good explanation. When I first saw artifacts of things generated by code in our repo, I had a big WTF moment, but it made a lot of sense once someone pointed out that it was Rather Handy for catching bugs in the code that does the generation.
> We're complicating things beyond reason nowadays.
There isn't always a way to dumb things down to the level that people would like. It would be so much easier to get to work if I could fly in a straight line between work and home, but I don't complain that the world doesn't accommodate me.
He also mentioned using a simple shared drive. It would have been difficult to know which of these solutions was the right level of complexity and capability without knowing more context (a point which he seemed to make clear).
Valid question: Because the tooling does not think in terms of artifacts that are versioned in repositories, but rather in terms of "files" that are in a given location. I'm using a shared location, but every build requires modifying another file to point to the now current version. It's all solvable, but the easiest solution would be to just version the result.
I could fix the tooling, but alas, I have other yaks to shave. It's an imperfect world.
> but every build requires modifying another file to point to the now current version
If you could version the file in git, you would have to check in the new version of the file, so it's not like you're adding a step to have to update (e.g.) a symlink.
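If the pointer is a symlink, the update can even be made atomic, so readers never catch it half-rewritten. A hypothetical sketch (paths are made up, and `mv -T` is GNU coreutils):

```shell
set -eu
SHARE="${SHARE:-$(mktemp -d)}"       # stand-in for the shared drive
NEW="$SHARE/build-20160101-120000"   # freshly published build (placeholder)
mkdir -p "$NEW"

# Plain `ln -sfn` unlinks and then recreates, leaving a brief window
# with no link at all. Creating a temp link and renaming it over the
# old one is atomic (rename(2)); -T keeps mv from descending into a
# directory if "current" already points at one.
ln -sfn "$NEW" "$SHARE/current.tmp"
mv -T "$SHARE/current.tmp" "$SHARE/current"
```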
But I have a lot of files lying in a shared location that are named build-<datetime> and I can accidentally break revisions in the repo by moving/renaming/deleting any of these. That may be a feature to some people, but that's something I consider a weak spot. It's brittle and prone to breakage and I dislike brittle.