One of the most annoying things to deal with in a CI workflow is flaky tests.
I guess another controversial opinion: this is a problem with the idea of CI (or at least how we work with it and what we expect to get out of it), rather than the idea of randomized tests.
In what way? My personal expectation of CI is predictable, repeatable builds that give me some assurance the software is working as designed. I also like that it forces everything to be scripted: no "only Bob knows how to build the release file."
Flaky tests are an indicator of poor code: maybe it's your actual code, maybe it's a bug in the test code, or maybe it's an external dependency combined with a lack of error handling in the test code, but there's a problem somewhere.
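As a purely illustrative sketch of that last case (the module name `my_client` and its `_http_get` helper are hypothetical, not from anything above): a test that calls an external service directly will fail whenever that service hiccups, while a version that stubs the dependency out only exercises your own logic.

```python
import unittest
from unittest import mock

import my_client  # hypothetical module under test, assumed to wrap an HTTP API


class FlakyStyle(unittest.TestCase):
    def test_fetch_price(self):
        # Flaky: hits the real network, so any timeout or outage fails the
        # build even though my_client itself may be fine.
        self.assertEqual(my_client.fetch_price("WIDGET"), 42)


class StableStyle(unittest.TestCase):
    def test_fetch_price(self):
        # Stable: the external call is replaced with a canned response, so the
        # test only exercises our own parsing and error-handling logic.
        with mock.patch.object(my_client, "_http_get", return_value={"price": 42}):
            self.assertEqual(my_client.fetch_price("WIDGET"), 42)


if __name__ == "__main__":
    unittest.main()
```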
The presence of bugs doesn't necessarily indicate useless software. If tests are failing (or flaky...), that's probably something to look at at some point, but that doesn't necessarily mean it's the highest priority to look at. In most places where CI gets deployed -- at least in commercial environments -- there seems to be a goal of making test failures a non-maskable interrupt.
I did admit this was controversial! But it fits in with a more general view that there are a lot of tools which make good servants but poor masters. Auto-builders are a good thing, partly because (as you say) they can help to clarify what is required to make a build, and partly because (especially for dependency-heavy software, which seems to be the norm nowadays) they can help catch things quickly when the dependencies shift beneath you. Making them a hard gate on releases seems a little too close to making the tooling your master, though.
(Somewhat separately, I also worry about CI acting as a hiding place for complexity. Sometimes it reaches the point where nobody knows how to make a build without the CI tool any more. Then local testing and debugging become difficult.)
You have a good point. I think the danger comes from the potential for abuse. Once you ship software that has failing tests, you've established a precedent and are likely to face pressure to do so again even when the tests are more critical. That's why I'd resist it, at least. My experience is that if there's a real problem and everything in production is later on fire, no one cares about the caveats or risks you pointed out.
I do also agree that CI can hide complexity, but that's true of any tool, and of non-CI builds too. Compared to a human running things, it's at least somewhat self-documenting by virtue of being scripted.
A valid answer might be "nowhere". Code review is a hot topic at the moment, but I'm pretty sure a lot of software I've found useful over the years was created with little or no formal review, and I think we should at least consider the possibility of leaving trustworthy developers to work without constant oversight.
A side effect of this is that if your individual productivity goes up, your real wage might go up, but if your whole industry’s productivity goes up, it’ll be mitigated by this effect.
Articles telling us to focus on team, rather than individual, productivity should probably be viewed through this lens.
There's an argument for valuing ten years of Silverlight experience -- especially if it includes some gnarly projects where the framework isn't really holding your hand that much -- over ten years of flitting between web frameworks but never really getting much beyond implementing the stuff that shows up in the tutorial.
I think "going into the weeds" (and not being afraid to do so again, in another context) has some value independently of any specific patterns or tricks that apply to a particular technology.
Over the years I’ve learned few things are more permanent than a temporary solution.
This is something you hear in a lot of places. Often (not always) it seems to be spoken with regret. I think we should be quicker to celebrate that a problem has been solved (and seems to be staying solved!) without worrying so much about whether it was done "properly".
Indeed, I think the frequent success of temporary solutions should be seen as a challenge to the prevailing orthodoxy that software developers should be striving towards more discipline and process. Instead, we should be looking seriously at why quick fixes and cowboy coding work so well, and asking whether we can apply these more widely.
In my experience, it is not generally a matter of putting some quick and dirty script in to solve a problem and having it go on solving the problem without modification indefinitely. It is more often:
1. Put a quick and dirty script in to quickly solve some immediate problem
2. When that quick and dirty script stops working, instead of putting a proper solution in place you either add another quick and dirty script in to plug the issue or you modify the original quick and dirty script in some ad-hoc way
3. Repeat over and over again
And then you end up with some impossibly complex Rube Goldberg machine.
I'd certainly agree that if a quick fix starts to accrete complexity, that's good evidence that it's time for a rethink. "Ad-hoc modifications", I'm less certain about. How often is this happening? How much effort do the modifications take? Would the modifications still be needed given a "done properly" version? (My experience: often yes.) Would they be substantially easier?
It can go either way, and I'm certainly not trying to argue that quick fixes should never be replaced -- just that the trade-offs should be considered.
This can be taken to extremes. Some years ago, a popular historical novelist told me something along the lines of "I often talk to people who say 'I'm writing a historical novel', at which point I'm wondering whether this could be a potential competitor, '...but I'm still doing the research', at which point I realize that there's not a lot to worry about." Looking into background and what others have done is valuable, but it's pretty easy to let it prevent you from ever taking action yourself.
If I had to retain one thing from SICP, it would be this, from the preface to the first edition:
First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.
It's a good set of questions to keep in mind, but I'm not sure it always leads to a feasible course of action.
What I like is working independently on a gnarly problem for some extended period of time. That explicitly includes spending the odd day "spinning my wheels" without immediately being pushed to ask others for help -- because bumping my head against things while working alone is the way I like to learn.
This way of working seems to be under fairly vigorous attack in favour of "everything needs a team", and I'm not quite sure how to work around that. Just pointing out that I'm getting things done does not seem to be sufficient.
However... skimming through some of those, they actually look like pretty typical "work on a team" positions (e.g. "...technical guidance and mentoring of Computer Technicians with focus on accurate completion of work orders and execution of standard operating procedures..."), not "this cupboard has spares, this cupboard has rations, see you in 6 months, good luck!".
Any hints for finding something that's closer to the second option?
Not IT-wise, but there are still a few fire lookout/lighthouses that are manned. Look for jobs to maintain stuff like that. Rural electrical, microwave tower, pump station, pipeline survey, earthquake sensors, road snow plowing. Anywhere there are long stretches of infrastructure and low population. Live in a trailer.
Trucking industry is doing well for mobile loners.
Land surveyors in places like Alaska spend a lot of time with just their backpack, equipment and rifle. I think there are still jobs for population surveys, like counting penguins/birds.
Leidos would be another company that does Antarctica and satellite ground station stuff.