That's where I was going when I wrote this post. Logging and log analysis are a large part of the continuous delivery tooling that I've been working on for the past 7 years.
I think I took the metaphor more to heart than others did. I like to think of myself as a good storyteller: through stories, I can explain fairly in-depth computer science topics to my business team, who lack technical backgrounds. My problem seems to be translating this to software structures.
I wish I could be as good a "writer" in my software, but I am always looking to others for best practices. In this case, I am in a constant state of doubt and "writer's block"; I need to figure out how to unleash my own creativity.
Based on the other usages in the article, it looks like some kind of link syntax, as in Markdown or wikitext, that isn't being parsed/replaced. If so, then it just means "counterintuitive."
LOL, I am the author and I didn't know what that comment meant when I saw it above!! Derp. I will go fix my broken links now :) Thanks for the heads up!
Vernor Vinge and Charles Stross spring immediately to mind. Also James Tiptree, Jr. was a CIA analyst in the 50s and 60s so probably was a regular user of the early government mainframe systems.
ajarmst's point goes beyond click-baitiness -- indeed it is about the little quote-box on the first para rather than the title. You say: "Any software system begins as a shared narrative about a problem and the people who come together around solving that problem."
I can, in fact, bend my view of the facts to fit this metaphor. And if I am also dubious about doing so, that's OK, as it makes it all the more likely that I will learn something new from your post.
So why spoil that with the arrogant demand that: "If you don’t accept the above proposition completely then nothing I have to say about software is going to work for you."?
Just to advocate for the devil here, I think some advice is predicated on a shared model, between the giver and the taker, of how things work. If you don’t buy the model, then you probably won’t deeply understand or agree with the arguments that would get you from the model to the thesis.
While I appreciate the spirit of what you are saying, the average time on page of 3 minutes 45 seconds (across 2k uniques) indicates most visitors are continuing past the paragraph you claim would be problematic.
Devil's advocate: Do you know what the average time to read the entire thing is? If it's 7.5 minutes, then one possible explanation is that your audience is split 50/50. Median time would be a much better indicator.
Yep. I spend a lot of time qualifying and hedging on these ideas in the work I do for software organizations. These blog posts are my opportunity to just lay my true opinions out for people and see what happens :)
The fact you thought about it enough to write 3 paragraphs about it shows the stickiness of my click-bait technique. You may not like my post but you will remember it.
"Time grain" seems like the wrong term here; I will replace it with something more specific. Thank you for pointing this out, and if you still dispute my math, I would consider it helpful to hear why :)
I think he's using the ratio of a microsecond to a half hour to express scale; a half hour is approximately (to an order of magnitude) 10^9 microseconds.
1,000,000 (microseconds/second) × 60 (seconds/minute) × 30 (minutes/half hour) = 1,800,000,000 ≈ 1.8 × 10^9
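As a quick sanity check on the arithmetic above, here is a minimal Python sketch that computes both the half-hour ratio from the comment and the microsecond-to-year ratio discussed below (using a non-leap 365-day year):

```python
# Sanity check of the time-scale ratios discussed in this thread.

MICROSECONDS_PER_SECOND = 1_000_000

# Microseconds in a half hour: 1e6 * 60 s/min * 30 min
half_hour_us = MICROSECONDS_PER_SECOND * 60 * 30

# Microseconds in a (non-leap) year: 1e6 * 60 * 60 * 24 * 365
year_us = MICROSECONDS_PER_SECOND * 60 * 60 * 24 * 365

print(f"half hour / microsecond = {half_hour_us:,}")  # 1,800,000,000 (~1.8e9)
print(f"year / microsecond      = {year_us:,}")       # 31,536,000,000,000 (~3.15e13)
```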
Ok. I was just extrapolating: if the ratio from a microsecond to a half hour is 10^9, which is within a couple of orders of magnitude of the age of the universe in seconds, then the ratio from a microsecond to a year, expressed as an integer, would be much larger than the age of the universe.
I'm realizing as I type this that I am not totally clear here and I will do some reflection now as to how to rewrite this point so it does not raise such objections. Thank you again for your help with this.
But I am talking about scale. The number of different-sized "bricks" or "units" of time in which you can choose to execute concurrent or dependent logic is staggering, and all too often this is overlooked. Thus "February 29" bugs and "New Year's Day" bugs, not to mention the entire error classes of race conditions and cascading failures.
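A minimal sketch of the classic "February 29" bug, using Python's stdlib `datetime` (the helper name `naive_add_year` is mine, for illustration): bumping the year while ignoring leap years blows up on a leap-day date.

```python
from datetime import date

def naive_add_year(d: date) -> date:
    """Naively bump the year field; buggy on Feb 29 of a leap year."""
    return d.replace(year=d.year + 1)

leap_day = date(2024, 2, 29)
try:
    naive_add_year(leap_day)
except ValueError as e:
    # date(2025, 2, 29) does not exist: 2025 is not a leap year
    print(f"February 29 bug: {e}")
```

Code like this passes tests run on 364 days of the year, which is exactly the "size of the time brick" problem: the failure only exists at one grain of the calendar.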
Wow. Thanks for the detailed response. I will try to answer point by point.
#2 that tight coupling can ever be good comes as a shock to the learner
#3 any breakage is an opportunity to learn from and about failure — also, when it's broken and you know, that is better than when it's broken and you don't know
Regarding regression testing: writing tests is a design activity, so if it is "driven" by whatever happened to break last, that sounds like treating the symptom, not the underlying cause. That is why I have never been a fan of "bug-driven development" nor of large-scale regression testing.
#20 "who draws the lines" is an incredibly important political question. So who makes the declaration that the so-called black box cannot be, or is not to be, opened? This was my point. I understand what a black box means in software-testing parlance.
#51 testing is runnable documentation, and sufficiently advanced monitoring, particularly ubiquitous StatsD usage, is indistinguishable in practice from testing; see https://www.youtube.com/watch?v=uSo8i1N18oc
#66 I truly do not understand the objection you are making here. If an event goes unobserved and is without impact it is the same as if the event never occurred.
#83 no, bugs always cluster at the interfaces between components
#84 Uh oh. I need to check my math and will respond further in another comment once I do so. Thanks!
#93 It's not that testing doesn't help. It's that if you have X amount of testing and you add Y amount of additional testing, THAT is not correlated with better quality. Likewise if for Reasons you do less testing in the future than you do right now, that is not guaranteed to degrade your quality.
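On the StatsD point (#51): a minimal sketch of what "ubiquitous StatsD usage" means at the wire level. StatsD's line protocol is plain text over UDP; the metric name `myapp.requests` and the host/port below are illustrative assumptions, not anything from the post.

```python
import socket

def statsd_line(name: str, value: int, metric_type: str = "c") -> str:
    """Format one metric in the plain-text StatsD line protocol, e.g. 'name:1|c'."""
    return f"{name}:{value}|{metric_type}"

def send_metric(name: str, value: int = 1,
                host: str = "127.0.0.1", port: int = 8125) -> None:
    """Fire-and-forget a metric over UDP, StatsD's usual transport."""
    payload = statsd_line(name, value).encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))
    sock.close()

# Emitting a counter on every request turns the monitoring system into a
# continuously running assertion about production behavior.
send_metric("myapp.requests")
```

Because UDP is fire-and-forget, instrumenting every code path this way is cheap, which is what makes "sufficiently advanced monitoring" practical as a stand-in for testing.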
>#66 I truly do not understand the objection you are making here. If an event goes unobserved and is without impact it is the same as if the event never occurred.
#66 says: "If no one ever finds out about the bug then the bug never existed in the first place."
While the outcome is the same, it doesn't literally mean the bug never existed. The existence of a bug is orthogonal to its discovery. Its discovery does not bring about its existence.
Do you have any data for #93? I'd expect a power-law distribution.
I would argue that it does but this is a matter for phenomenologists. The practical result is that it's as if the bug never existed. Beyond that let's agree to disagree.
#93 No hard data. How would you even begin to measure such a thing? No two software shops are the same; hell, no two projects within the same team are anywhere near similar. How do you baseline? What about the impossibility of a control group for a software team?
I find it interesting that a power-law distribution would result in the kind of behavior I am describing: relatively small impact for even fairly large variations in the number of tests applied to a project.
"who created it" implies a causal chain of events whereby someone is responsible for the bug. This is Safety-I thinking. John Allspaw addresses this when he states that "there is no root cause" for an incident or bug. So in a very practical sense, YES, the bug is "born" at the moment the user "discovers" it. Note that "discovery" here is in the imperial sense, where the user has drawn a line on the software map (so to speak) and labeled it: beyond here be bugs. DevOps very directly implies skepticism toward causality as the primary/default phenomenology for understanding bugs.
I am the author of this post! If you notice inaccuracies or have thoughts, please comment here. I will be reading and responding to all comments as always.