This article implies, but doesn't come right out and say, something that I strongly suspect: that producing an accurate (or even close-to-accurate) software project estimate is a time-consuming, unpredictable task. The question nobody seems to be examining is whether the cost of doing a large, expensive up-front analysis that might take an arbitrary amount of time is less than the cost of just doing it concurrently with the software development itself.
I wish I could find the reference, but one study showed that additional time spent estimating does not improve estimate accuracy. Only when historical data on similar tasks existed could estimates be made accurately.
Example: how long will it take you to learn advanced physics? Even if I gave you a week to come up with the estimate, it is likely worthless (maybe precise, but not accurate). To me this makes sense because you cannot know the unknown unknowns until you hit them (and throw in some of the journeyman fallacy).
Meanwhile, "how long will it take to brush your teeth?" is easy to estimate. "How long will it take to brush your teeth with a new electric toothbrush?" is also reasonable to estimate.
The book Rapid Development, which is somewhat evidence-based, has a chapter on estimation and concludes the same thing. At every stage of a project there's some amount of inherent uncertainty, and spending more effort on estimation does not remove it -- only getting further into the implementation does.
A lot of these articles assume the work being estimated will remain relevant during the estimated timeframe. My last (very large, not FAANG) employer started every project with a hard deadline; estimates were only used for budgeting, not time. Then every project changed on a daily basis until it was obvious it would never ship by the deadline, which was changed at the last minute. Every single project worked like this (note these projects were for us, not external clients). I know of no theory of estimation that can account for continuous change unknown at the start.
I've worked on reasonably large (>$100M) defense programs for the last 25 years. Nearly all of them overran both time and budget constraints. They have mostly been firm-fixed-price, except for the last 5-ish years, which have been incentivized agile. I have seen estimation be accurate when the following criteria are met:
1) Experienced personnel are estimating the work (experienced technically and in the product being developed);
2) The work being estimated is of similar technology to existing product;
3) The time period being estimated is less than 6 months.
If any of these are violated -- e.g. new hires are given the estimation task, a year's worth of work is being estimated, new technology is being estimated, etc. -- I would not trust the estimate, and would apply a big ol' bucket of risk money.
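For what it's worth, here's a toy Python sketch of the kind of risk buffer I mean; the multipliers are made up for illustration, not calibrated from any program data.

```python
def risk_adjusted(base_estimate_weeks, experienced_team=True,
                  familiar_tech=True, horizon_under_6_months=True):
    """Pad a base estimate with extra contingency for each criterion that fails.
    The 50% increments are illustrative assumptions, not calibrated values."""
    multiplier = 1.0
    if not experienced_team:
        multiplier += 0.5
    if not familiar_tech:
        multiplier += 0.5
    if not horizon_under_6_months:
        multiplier += 0.5
    return base_estimate_weeks * multiplier

print(risk_adjusted(20))                            # all criteria met: 20.0 weeks
print(risk_adjusted(20, familiar_tech=False,
                    horizon_under_6_months=False))  # two violated: 40.0 weeks
```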
I actually have seen this legitimately work as well, as in being on a program where we pretty consistently hit every deadline with every planned feature at or under the promised price, for years on end.
But then take the same company, even many of the same people, and put them on a new project, and the estimates go to crap.
It really takes a lot of domain-specific experience, and I mean "domain" pretty narrowly: knowing the code base, knowing the developers, knowing the customer, knowing the external surprises and constraints you're likely to come across because you've seen them so many times before. I've never seen it work for greenfield, even in an organization with the discipline and know-how to do it correctly for the older applications they've been working on for years or decades.
In other words, "estimates" are actually just political and budgetary tools. They have no relation to real-world work. One side agrees to call them "estimates" (the product side) and the other side (the business) agrees to let deadlines inevitably slip.
Oh, people are examining that question alright. It's just complicated and therefore hard to come up with a definitive answer.
If I had to summarise what I have read on the topic, it would be that either way works, but the up-front analysis is orders of magnitude more expensive. Some sectors require this: defense, nuclear, medical, and so on. In most areas, you don't need that and can't afford it in a competitive market.
But to hint at the complexity of the question: Not only is every project different, there are also sneaky feedback loops in there.
By the time you have completed the large up-front analysis the world is likely to have moved underneath you, invalidating some of the assumptions that went into the analysis. You can shape your analysis to account for this, for sure. But at that point does it still count as one up-front analysis or several? Is it truly up-front if it leaves options and decision points open for later?
And it's a continuum: the more options and deferred decisions in the analysis, the more it starts to look like it's clearly done in parallel with the implementation. Where do we draw the line?
I think you, and the agile software industry at large, are completely mistaken.
There is absolutely no problem with intensive up-front analysis. In fact, this is what at least 80% of the effort should go into. But we have pretty much 0% staffing for that role. It's no wonder something doesn't work when no one's there to do it. Compare that to M&S in civil engineering.
Regarding the alleged downsides: Any wrong implementation will incur massive cost down the road (but that cost usually is not accounted for in a SaaS world), so you actually cannot afford to not do it. Assumptions can change, yes, but at least you make assumptions. In agile development, people simply ignore any assumptions and end up with a completely broken product that eats 80% or more of development time with maintenance. And of course, planning and analysis can involve prototypes and iterations.
I came to the conclusion that agile development just shifts the blame for bad planning from management to engineering. And I also think that the correct framework to planning and analysis is domain specific. The more constraints your domain enforces on your software, the better your analysis and estimation will be. Hence greenfield projects have the most problems.
You have me hooked! Don't you think starting (some) development in parallel with the planning is a way to reveal some of the faulty assumptions early on? Isn't that preferable to discovering them later?
While this is likely true, the problem is W2 labor tends to be compensated on a more or less fixed price per unit of time basis, and to cover that cost, you need to bill clients roughly in proportion to total labor time spent, and most clients aren't going to purchase something for an unknown price to be named later. So you need to estimate to give them a price.
From the perspective of the seller, these estimates don't even need to be "accurate" as long as you're making many of them and the errors are symmetrically distributed. There's even mathematical theory behind this -- it's the same idea that makes Fermi estimation work. The problem is that, from the client's perspective, they can't average out the errors over many estimates if they're only buying one thing, and from the seller's perspective, I don't think our errors are symmetrically distributed. Instead, we consistently underestimate how long something will take.
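To make the averaging argument concrete, here's a small Python simulation (task sizes, noise level, and bias are made-up numbers): with symmetric per-task errors the portfolio total lands near reality, but a consistent underestimation bias never washes out, and a client buying a single task just gets whatever that one error happens to be.

```python
import random

random.seed(0)

def simulate(n_tasks, bias=1.0, noise=0.3):
    """Ratio of total estimated effort to total actual effort for a portfolio.

    bias  -- multiplicative estimation bias (1.0 = unbiased, <1.0 = underestimation)
    noise -- symmetric relative error on each individual estimate
    """
    total_true, total_estimated = 0.0, 0.0
    for _ in range(n_tasks):
        true_duration = random.uniform(5, 50)          # "actual" effort in days
        error = random.uniform(-noise, noise)          # symmetric per-task error
        estimate = true_duration * bias * (1 + error)  # what gets quoted
        total_true += true_duration
        total_estimated += estimate
    return total_estimated / total_true

# Symmetric errors: the portfolio total converges on reality...
print("unbiased, 500 tasks:", round(simulate(500), 3))            # ~1.0

# ...but a consistent 20% underestimate never averages out.
print("biased,   500 tasks:", round(simulate(500, bias=0.8), 3))  # ~0.8

# And a client buying a single task gets whatever that one error happens to be.
print("unbiased, 1 task:   ", round(simulate(1), 3))
```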
It's not even unique to software. There's a huge housing boom going on where I live, with condo developments all over the place, and not a single one is actually ready to open by the season the sign out front claims it will be. Defense acquisition. Highway construction. Everything takes longer than the budget forecasts claim.
Part of the problem, especially when it comes to construction and defense, is awarding contracts to the lowest bidder with effectively no punishment for exceeding the initial estimate. This pretty much guarantees underestimation, for both statistical and behavioural reasons.
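The statistical part can be seen with a short simulation: even if every bidder's estimate is unbiased, always awarding to the lowest bid systematically selects the most optimistic one (the winner's curse). The numbers below are illustrative assumptions, not data from any real procurement.

```python
import random

random.seed(1)

def lowest_bid_selection(true_cost=100.0, n_bidders=5, noise=0.25, trials=10_000):
    """Each bidder estimates the same true cost with symmetric noise; the
    contract always goes to the lowest bid. Returns the average winning
    bid as a fraction of the true cost."""
    winning = []
    for _ in range(trials):
        bids = [true_cost * (1 + random.uniform(-noise, noise)) for _ in range(n_bidders)]
        winning.append(min(bids))
    return sum(winning) / trials / true_cost

# Even with individually unbiased bidders, the selected bid sits well below
# the true cost, so an overrun is baked in before any behavioural gaming starts.
print(round(lowest_bid_selection(), 3))  # roughly 0.83 with these parameters
```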
I somehow don't believe the "lowest bidder" thing, otherwise everyone would just be putting out contracts for <= $1. However, IIRC from working with people in the government contracting business, the "punishment" is that you don't very easily get more money if you go over budget. If your contract is $100, you get $100 and if you need another $100 you will spend $400 on lawyers and renegotiation.
As far as I understand, the actual "punishment" is that government agencies keep track of how badly particular contractors underbid and simply add that amount to their bids when evaluating them. So they're still going with the "lowest bid," but the bid they're considering isn't necessarily the bid the contractor puts forward. Theoretically that provides an incentive to be more accurate, but it only works if there is sufficient competition in the first place and at least some of the competitors are even capable of being accurate.
I believe when awarding construction contracts, some of the deciders look at past projects, which could discount underbidding. Often, other engineers are deciding on the contracts, so they also have a general idea of how much it should cost.
The problem is that once someone gives a date, then they are judged based on that date. In some cases, the software stops making financial sense after some multiplier of this initial date.
To avoid estimation you need to avoid dates and avoid projects that have high timeline risk. This means you would need to judge teams on speed to completion, or better, on customer satisfaction with project execution.
If you think about estimation as a learning activity in the sense of Theory Building, then you won't really care about the estimate itself (which is a nice by-product) so much as the confirmation that the task is clear and actionable. That's where the real value is, I think.
Isn't this basically Scrum? Instead of doing an estimate at the start, you measure the throughput of the team and then derive an estimate from that measurement. That's my understanding of it anyway; I could be wrong.
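A minimal sketch of that throughput-based approach, assuming the backlog items are roughly comparable in size (the numbers are invented):

```python
from statistics import mean

# Hypothetical historical throughput: completed backlog items per two-week sprint.
completed_per_sprint = [7, 5, 9, 6, 8]

backlog_size = 40  # items remaining, assumed roughly similar in size

avg_throughput = mean(completed_per_sprint)
sprints_needed = backlog_size / avg_throughput

# A crude range using the best and worst observed sprints instead of a single number.
optimistic = backlog_size / max(completed_per_sprint)
pessimistic = backlog_size / min(completed_per_sprint)

print(f"average forecast: {sprints_needed:.1f} sprints")
print(f"range: {optimistic:.1f} to {pessimistic:.1f} sprints")
```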