This is a weird thought, where your solution to the problem is just "have the people on the team be better."
The entire phenomenon exists because people who see how important these things are end up on teams led by and staffed with people who either don't see how important they are, or aren't competent enough to do the glue work.
The lack of parties and happy hours is music to my ears, personally. I can't stand these events, and they often work in the service of this strange social construct that we all know and love - "culture." At some point, socializing with your teammates outside of working hours became part of the job. It's strange, really, to almost force these things on people, as it doesn't even serve to strengthen the team bond that much. Soldiers don't bond at the bar, they bond on the field; they socialize at the bar.
Loneliness and isolation are often cited as downsides of remote work, but as someone who doesn't like to mix work and personal life and has a healthy social life outside of work, I just can't see it being an issue for everyone.
Someone else can find the source, but Netflix themselves have stated that the lion's share of viewing on their platform goes to TV shows, so it doesn't surprise me that they haven't invested in quality (and expensive) movies. Why would they if no one is watching them?
TV is pretty much in the same state on Netflix (AU at least). Search for a show and chances are it isn't there, even if it has been in the DVD bargain bin for years. Other shows keep disappearing (Downton Abbey, for example, was taken off just as my mother-in-law got to season 5). Lots of new shows are a season behind, and lots of newish shows never arrive.
Can anyone help me understand why we're even performing studies like this anymore? It would seem that all these years of "x can cause cancer" studies have just revealed that we generally know very little about what causes cancer. Is prevention really a viable avenue of control anymore? Shouldn't we just be focusing on early detection and treatment?
I think it comes down to performance measurement being a really weak area for many companies. No one really knows what's real when it comes to output over a 40-hour week, and so bosses are afraid of getting shafted by an employee whom they can't see toiling away at their desk.
It's all but a fact that a 40-hour work week doesn't result in 40 hours of productivity. Those non-productive hours are seen as acceptable in an office for some reason, but when you're at home and can fire up a video game, it's sacrilege.
The fact is that not everyone has the discipline to be productive while working remotely, and a company requires a different way of hiring, operating, and measuring to support remote work.
Coordination is also easier when projects require multiple people cooperating to get things accomplished, and in some situations escalating to more senior people is easier when someone can just walk over and ask for help.
You are right that employers want "proof" and butts in seats is a proxy. Having everyone go remote also requires changes to process and communications to keep people in sync. It requires tools and work to get everyone on the same page and keep them there. Most companies don't want to evolve to solve this; it is easier to rent a building.
The NYT is a quality journalism outlet. Obviously not every article is amazing, but if enough people upvote one, it gets to the front page. It's not some big conspiracy.
A friend of mine is making thousands per month simply selling notification-bot subscriptions to people who run private Discords. His bots don't even buy anything; they just send stock notifications to the rooms. Madness.
As someone who used Percona in production: it's fantastic for a small/medium workload (depending on your definition of those), but it has pretty hard limits on scaling without an intermediate layer, which then mostly defeats its purpose of dead-simple clustering.
I agree completely that 100k directs is bad, but they're writing from a perspective where the alternative would likely mean switching to a different product.
Outside of the scope of elaborate CI pipelines, I wonder how useful this can really be.
Big CI pipelines are one of the few instances I can think of where Bash is both an appropriate choice AND the resulting product is large, elaborate, and sensitive to failure - which would benefit from being tested. Most other applications of Bash are so simple that fundamentally altering how you write scripts for the sake of testing them ("Bash scripts must also be broken down into multiple functions, which the main part of the script should call when the script is executed.") seems like it could easily fall into the category of over-engineering.
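For what it's worth, the restructuring that quote asks for is small. A minimal sketch (the `greet`/`main` names are placeholders I made up):

```shell
#!/usr/bin/env bash
# Sketch of the "break the script into functions" structure: a test
# file can source this script and call greet directly, while normal
# execution still runs main.

greet() {
  printf 'hello %s\n' "$1"
}

main() {
  greet "world"
}

# Run main only when the script is executed directly, not when it is
# sourced by a test file.
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
  main "$@"
fi
```

Whether that guard is worth it for a ten-line script is exactly the over-engineering question.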
Beyond the CI pipeline use case, wouldn't the tools for which this would actually be useful be better off written in a proper programming language?
Whenever someone says "most people don't write large Bash scripts", I have to chime in and say that I would consider most GNU/Linux distros to be giant piles of shell scripts.
A year or two ago, Arch Linux migrated the tests for dbscripts (the server-side of how package releases happen) from shUnit2 to BATS. https://git.archlinux.org/dbscripts.git/
> I would consider most GNU/Linux distros to be giant piles of shell scripts.
Yes! That was one of the primary motivations for my Oil project [1]. I was building containers from scratch with shell scripts (in 2012 or so, pre-Docker), and I was horrified when I discovered how Debian actually works.
And of course it's not just Debian. Red Hat, Fedora, Alpine, etc. are all big packages of shell scripts, often mixed with other ad hoc macro processing or Makefiles. Alpine does this funny hack where their metadata is in APKBUILD shell scripts, which is limiting when you want to read metadata without executing shell.
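To illustrate the Alpine point, here is a stripped-down, made-up APKBUILD-style fragment (hypothetical package, not a real one):

```shell
# Illustrative APKBUILD-style fragment (hypothetical package).
# The metadata is plain shell variable assignment, so a tool that
# wants pkgver has to either execute this file or fall back to
# ad hoc text parsing.
pkgname=doohickey
pkgver=1.2.3
depends="musl zlib"

# Just to show the values only exist after the shell has run:
echo "$pkgname-$pkgver"
```

A declarative format (JSON, TOML, etc.) could be read without an interpreter; shell-as-metadata can't.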
I also point out in that post that Kubernetes is pretty new (2014) and it has 48,000 lines of shell in its repo.
That's true of most cloud infrastructure. If you deal with Heroku, OpenStack, Cloud Foundry, etc. there is a ton of shell all over the place. With buildpacks, Travis CI, etc.
And that's obviously not because the authors of those projects don't know what they're doing. Shell is still the best tool for that job (bringing up Unix systems), despite all its flaws.
The world now runs on clusters of Unix machines, and in turn big piles of shell scripts :)
> I don't know how bats works with the @test annotation? That doesn't look like valid shell syntax.
Bats runs the test files through a preprocessor[1] that, among other things, converts each `@test` case into a shell function. E.g. `@test "the doohickey should frob" { ... }` becomes `test_the_doohickey_should_frob() { ... }`.
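Roughly like this (a hand-written sketch of the transformation; the exact name mangling varies between Bats versions):

```shell
# Before preprocessing, the Bats source is NOT valid bash on its own:
#
#   @test "the doohickey should frob" {
#     [ "$(echo frobbed)" = "frobbed" ]
#   }
#
# After preprocessing, it is an ordinary function the runner can call:
test_the_doohickey_should_frob() {
  [ "$(echo frobbed)" = "frobbed" ]
}

# The Bats runner then discovers and invokes each generated function,
# treating a nonzero exit status as a test failure.
test_the_doohickey_should_frob
```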
Also, I looked at the buildpacks-ci repo you linked, and it still has ~2000 lines of shell in ~80 files.
For comparison, there's ~12,500 lines of Ruby.
Another issue I have is that if Kubernetes replaces its 44,000 lines of shell with 100,000 lines of Go (or probably more), that's a mixed result at best. There's just a lot of logic to express and shell does it concisely.
Of course, without a better shell, I don't blame them if they switch, but it's still a suboptimal state of affairs.
I agree bash is not a sane production language, and apparently so does Greg Wooledge, somebody who has maintained a popular bash FAQ for a very long time. I quoted him in my latest release announcement:
I did hear about Cloud Foundry moving from shell to Go for some things, and I also heard that Kubernetes was rewriting a lot of its shell. [1]
(1) If you had to choose between Go and bash, I can understand choosing Go. I saw some blog posts along those lines [2].
Although I don't think it's optimal. I'd be interested in seeing some of those "shell scripts in Go" if you have a link. There should be a better language for sane shell scripting (hence Oil).
(2) I imagine it's not fun to port thousands of lines of shell to Go (or Ruby) by hand. Oil is supposed to help with that via an approximate translation and good errors, but that part isn't done yet.
What IS close to done is a dialect of bash that is sane -- or CAN BE MADE sane with user feedback.
For example, in the latest release I added set -o strict-argv, and there's also set -o strict-control-flow and strict-word-eval. I will add a strict-ALL to opt into all at once, as mentioned in the release notes.
Also, OSH gives you all your parse errors at once, rather than having them blow up at runtime later:
[1] I just pulled the Kubernetes repo and there's 44K lines in *.sh files, as opposed to 48K lines a couple of years ago. If it were proportional to the project's growth, I would have expected it to be 100K lines by now, so it seems they are indeed getting rid of shell!
However, this only removes ONE LAYER of shell. Any cloud service that uses a Linux distro (which is all of them) is papering over all the nasty layers of shell underneath! I hope that Linux distros will gradually move to Oil to get rid of this legacy.
Bash is a proper programming language, though not necessarily a modern or ergonomic one.
The problem is:
1. Bash is assumed to be everywhere, so people use it for maximum portability or bootstrapping.
2. It starts as a 10-100 line "quick" script, but then grows into a monster, and nobody wants to take the hit of rewriting it in a modern programming language.
I've found that when you enforce the same best-practice requirements regardless of language, people start choosing not-Bash, since "they have to do it right anyway." Many devs see Bash as a shortcut to avoid the extra work.