> Engineers will become more focused and engaged, managers will become more effective and empathetic, and companies will build faster with higher quality. Engineering will rise to a whole new level.
A bit too much hyperbole for my taste, given the less-than-groundbreaking ideas.
I mostly agree. This doesn't seem to add any real visibility that velocity tracking (à la agile) wouldn't give you already (not that I'm advocating for agile, mind you...).
Consider: I have two teams with the same staffing levels and the same general seniority. For this example, let's assume each is a team of 5: 1 tech lead, 2 seniors, and 2 juniors.
Both teams have approximately the same meeting load, and both work on the same stack with the same dev tools.
Team A consistently releases new features faster than Team B. Why?
Because if the answer is "find the blocker", aren't we right back at
> "your engineering leaders will simply justify failures, telling stories like "The customer didn't give us the right requirements" or "We were surprised by unexpected vacations.""
except with blockers this time?
Maybe Team A is actually just better than Team B.
Maybe Team B is actually working on a feature set that has more inherent complexity.
Maybe Team A releases faster but also has more incidents in prod.
Maybe Team B releases a larger changeset on average.
None of this is getting addressed or answered.
----
None of that is to say that measuring blockers isn't a useful idea, but it's certainly not some silver bullet.
Those blockers should be things that give you falsifiable stories, no?
So if someone says "it's because we're blocked on [slow delivery of designs from another team]", and you measure that specifically, then improve it, and the team's output still hasn't changed, you've learned something.
I've certainly seen those reasons given before, but I haven't seen people turn them into specifically measured things; more often it's "OK, let's see if we can improve it" with little or ineffective follow-up.
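To make that concrete, here's a minimal sketch of what "measure the blocker specifically" could look like. All the numbers and names are invented for illustration, and a real comparison would want a proper significance test rather than an eyeball of the means:

```python
from statistics import mean

# Hypothetical measurements of one named blocker: days each feature spent
# waiting on design handoff, sampled before and after a process change.
design_wait_days_before = [6, 9, 4, 11, 7]
design_wait_days_after = [2, 3, 2, 4, 3]

# Team output over the same windows (features shipped per sprint, say).
features_per_sprint_before = [3, 2, 3, 2]
features_per_sprint_after = [3, 2, 2, 3]

blocker_improved = mean(design_wait_days_after) < mean(design_wait_days_before)
output_changed = mean(features_per_sprint_after) != mean(features_per_sprint_before)

if blocker_improved and not output_changed:
    # The falsifiable part: the blocker got measurably better but throughput
    # didn't move, so the original story ("we're slow because of design
    # handoff") was wrong or at least incomplete.
    print("Blocker improved, output flat: the story doesn't hold up.")
```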
It helps to instrument the journey of your work from Jira all the way through build, deployment, run, and monitoring (observability).
From that you can get measurements of how long each stage takes and the duration of each transition.
From there you can compare Teams A and B. The transition times are where the human time cost usually sits.
Just getting the time from when a Jira ticket or feature is raised, to when it is picked up, to the first commit, to the first test, to the final build already gives you valuable insight.
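As a sketch of what that instrumentation could yield, assuming hypothetical per-ticket timestamps exported from Jira and the CI system (all field names and data here are made up for illustration):

```python
from datetime import datetime
from statistics import mean

# Hypothetical per-ticket event timestamps, e.g. exported from Jira and CI.
tickets = [
    {"team": "A", "raised": "2024-03-01T09:00", "picked_up": "2024-03-03T10:00",
     "first_commit": "2024-03-04T15:00", "first_test": "2024-03-04T16:00",
     "final_build": "2024-03-06T11:00"},
    {"team": "B", "raised": "2024-03-01T09:00", "picked_up": "2024-03-08T10:00",
     "first_commit": "2024-03-09T12:00", "first_test": "2024-03-10T09:00",
     "final_build": "2024-03-14T17:00"},
]

# Pipeline stages in order; each consecutive pair is a transition to time.
STAGES = ["raised", "picked_up", "first_commit", "first_test", "final_build"]

def transition_hours(ticket):
    """Hours spent in each transition between consecutive stages."""
    times = [datetime.fromisoformat(ticket[s]) for s in STAGES]
    return {f"{a} -> {b}": (t2 - t1).total_seconds() / 3600
            for a, b, t1, t2 in zip(STAGES, STAGES[1:], times, times[1:])}

# Average each transition per team, so Teams A and B can be compared stage
# by stage instead of on a single end-to-end number.
by_team = {}
for t in tickets:
    for name, hours in transition_hours(t).items():
        by_team.setdefault(t["team"], {}).setdefault(name, []).append(hours)

for team, transitions in sorted(by_team.items()):
    print(f"Team {team}:")
    for name, values in transitions.items():
        print(f"  {name}: {mean(values):.1f}h avg")
```

If one transition (say, raised -> picked_up) dominates for one team, that's where the line of inquiry starts.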
The points you raised towards the end can be answered if observability of your CI/CD pipeline is actually in place; at the very least it gives you a place to start a line of inquiry.
Naturally the blockers will be aggregated into some of these values, but as you work through the journey they will start clustering at certain stages, and that may highlight a significant problem that needs to be addressed.
There's a wealth of data being left on the table that can help inform management decisions.