Hacker News
It took 12 weeks to ship an MVP I thought would take 3 (boxci.dev)
282 points by davnicwil on Nov 7, 2019 | 217 comments


How did you manage to constrain the time overrun to only 4x?

As a mercenary engineer, I've had to answer the estimation question many times. Whenever I've given estimates, it has only ever shot me in the foot. Spolsky might have figured it out, but I haven't. So I try to avoid giving estimates as much as possible and instead focus on demonstrating the velocity of a working system they can choose to stop funding at any time.

Some clients, especially from the construction industry, expect me to give them Gantt-chart style estimates down to the hour. Here is how I convey the software estimation problem to them:

If I write the same code twice, I done fucked up. So any code I write is new code. If I can reuse code, I will, but that's not the code you're asking about. In your industry, you have built the same house many times before with the same team, on the same foundation. But you are employing me to design, architect, and construct a new house that has never been built before. If it existed already, you would just buy it off the shelf instead of employing me.

So think of me as more of an architect than a labourer; together, with you as the client, we are embarking on a design you're not quite sure you want yet, one that must not collapse on a potentially unstable foundation.


Having done hundreds of small to mid-sized projects, I take comfort in the fact that estimating is a very distinct task that even the best developers learn last.

I've got three tricks in my pocket that at least help me with this:

The first is that I go by the formula mentioned in "The Mythical Man-Month" — which is that the effort in larger projects is distributed as follows:

1/3 planning
1/6 coding
1/4 component test and early system test
1/4 system test, all components in hand

Developers tend to estimate mainly their coding time, which leads exactly to the effects mentioned. It also explains all the annoying holdups once the project is "almost done": that's simply the system test, and it will add another 25%.

The second is that I simply add 5-10% on top for each person involved in the project (dev, stakeholder, anything) for communication overhead. It usually holds true.
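A minimal sketch of how those first two tricks compose, in Python (the 7.5% midpoint of the 5-10% range, and the assumption that the gut estimate covers only the coding sixth, are my own illustrative choices):

    # Scale a coding-only gut estimate to a whole-project estimate.
    def project_estimate(coding_weeks, people, overhead_per_person=0.075):
        total = coding_weeks * 6                           # coding is ~1/6 of effort
        return total * (1 + overhead_per_person * people)  # plus communication tax

    # A 2-week coding estimate with 4 people involved -> ~15.6 weeks overall.
    print(project_estimate(2, people=4))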

The third trick is a meticulous focus on lean delivery, which involves scoping down, early testing of as much as possible, hidden and pre-deadlines, and setting tractable sub-milestones.

Unfortunately, none of these techniques accelerates the project; they just help you be more realistic. If you find something that accelerates complex projects, let me know ;-)


Really insightful, thank you.

Also regarding:

> If you find something that accelerates complex projects, let me know

When possible, cut scope and split the project into smaller projects with real deployments (much like you said with "The third trick is a meticulous focus on lean delivery, which involves scoping down, early testing of as much as possible, hidden and pre-deadlines, and setting tractable sub-milestones.").

In nearly all my projects I now propose something like this to clients: "Ok, we understand your needs and requirements. What do you think about having a V1 with only these core features deployed in a few weeks (which ideally already provide business value), and after that we iteratively improve it?". Works great for smoothly throwing away all the nice-to-haves.


> If you find something that accelerates complex projects, let me know

Developer experience. I've been working as a developer for a living for 18 years, and I thought I was pretty hot shit 10 years ago already, but I can probably do more today in 4 days than I used to in 4 weeks (or 4 months a few years before that).


This holds very true up to the architect level.


As learning curves generally flatten out, that is to be expected.


I've switched sub-fields of development a few times during my career (scientific computing, back-end web, front-end web, desktop applications, embedded, games [various platforms - native desktop/mobile/consoles, html5, flash, etc]) and I found that helped me keep the learning curve steep.


Don't forget the sanity check at the end. If other people also add a lot of safety hours, then you will end up with an inflated estimate that is hard to sell.


>>If you find something that accelerates complex projects, let me know ;-)

I know this was partly joking but there are two real ways:

Knowing your true critical path, and proactively re-planning and managing the risk of the activities that sit on it, to clear the way for it to run smoothly... that's of course assuming you planned and estimated it correctly in the first place.

If you have a real schedule that is properly resource- and risk-loaded, run it through a Monte Carlo sensitivity analysis and see if you can re-plan those highly sensitive activities or de-risk them.
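As a toy illustration of that second point, a Monte Carlo sensitivity pass can be sketched in a few lines of Python (the task network and the triangular (low, mode, high) day ranges below are invented; dedicated scheduling tools do this properly):

    import random

    # Invented example schedule: (low, mode, high) duration in days per task.
    tasks = {
        "design":   (5, 10, 25),
        "backend":  (10, 15, 40),
        "frontend": (8, 12, 30),
        "testing":  (5, 8, 20),
    }

    def simulate():
        d = {name: random.triangular(lo, hi, mode)
             for name, (lo, mode, hi) in tasks.items()}
        # design first, then backend/frontend in parallel, then testing
        total = d["design"] + max(d["backend"], d["frontend"]) + d["testing"]
        return d, total

    runs = [simulate() for _ in range(10_000)]
    totals = [t for _, t in runs]
    mean_total = sum(totals) / len(totals)

    # Crude sensitivity: correlation of each task's duration with the total.
    # High-correlation tasks are the ones worth re-planning or de-risking.
    for name in tasks:
        xs = [d[name] for d, _ in runs]
        mx = sum(xs) / len(xs)
        cov = sum((x - mx) * (t - mean_total) for x, t in zip(xs, totals))
        varx = sum((x - mx) ** 2 for x in xs)
        vart = sum((t - mean_total) ** 2 for t in totals)
        print(f"{name:8s} r = {cov / (varx * vart) ** 0.5:+.2f}")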


One of my favorite shows is Grand Designs.

Building a house is probably one of the best-understood human activities; as you stated, you can take a standard design on a normal building lot and throw up a house in a very predictable amount of time. If you've done it enough, you can write down a schedule of when every piece of material and every trade needs to be on site.

Grand Designs always involves some combination of unusual design, unusual sites, and unusual techniques, and frequently inexperienced builders. People set out to accomplish something grand, and usually succeed, but it takes much longer and more money than expected, and especially more than constructing a conventional house on a conventional lot.

Now, besides echoing your point that many novel software projects are much more akin to a Grand Designs house than a cookie-cutter ranch in a Midwestern suburb: one concept Grand Designs proves true time and again is that once you are out of the ground -- once you have your foundation in, your gas and water and sewer and electric hookups -- the hard part is over and the job will proceed much more smoothly. It's hard to estimate before you get out of the ground, because the ground is filled with unknowns and unforeseen problems. You might still misjudge how long it will take you to finish, particularly the finicky details that always take forever, but this is purely a failure of estimation; an experienced builder could accurately predict the time to spackle, trim, paint, and install flooring and built-ins, you just didn't realize it. But under the ground, there are all sorts of problems with rocks and water tables and pipes and whatnot you can't predict.

Do we, as software engineers, have a similar inflection point in our software development? Do we ever get "out of the ground", to the point that most of the unknowns are known and we have a predictable (if still great and difficult) amount of work remaining?


This is a good analogy, and when you're bootstrapping a product it extends even further, because in addition to being the architect you're also taking on the role of town planner: deciding where it gets built and thinking about whether it should be built at all (do people need it?).


Town planner or city planner is a great analogy! The role conveys the uncertainty of future growth and the anticipation of problems much better than an architect, who typically does only a one-off design. Thanks for that :).


In my experience Joel was spot on, and I disagree: you are always writing the same code again and again, just tweaking it a bit to fit the requirements.

An easy example is the thousands of JSON APIs you can integrate these days, where the code to do that is almost identical, but not quite. Generally the only complications are in what stupid authentication and pagination method this API came up with.

But it happens everywhere in coding. Every system might need user management, but slightttttly different each time. Or sending emails, but Bob already uses SendGrid, whereas traditionally you've used Mailgun.

Also the software industry changes all the time. That email code you wrote ten years ago using SMTP won't be reusable. The invoicing system you built for one client using PHP and jQuery 1.7 in 2012? Not appropriate to sell to a client today.

Basically, you should have written things at least very similar to what you're estimating. Then you need to find your "multiplier": the figure you multiply your gut estimate by to get the real estimate.

Plus, there's a lot of legal complication around reusing code with different clients; often you should be rewriting purely to keep both them and you safe from a copyright perspective.


In my experience working in a broad range of fields in IT, web development is quite unlike any other type of development when it comes to code reuse and framework churn.


IME, web frontend might be particularly repeatable once you drop the artistic flourishes. But when I was doing desktop development, beyond simple UI issues, almost all the work I had was unique and dictated by the customer's ever-changing and ever-weirder requirements. In backend work, similarly, pretty much everything I write is a new and unique flower on an unkempt bush of overall architecture.


Not really, I've built all sorts of stuff, but ultimately when you're hooking together a client's accountancy SaaS to their order system, it's going to be roughly the same code and take the same time as the other umpteen times you've done it for other clients. But it'll be a little different as they're using Xero instead of Quickbooks which you used last, and they have a different order entity design because of X.

I really feel like anyone arguing otherwise is slightly delusional. I find it particularly funny the guy two up claiming frontend might be reusable code, but backend isn't.

Once you've done one invoicing/emailing/workflow/search system/time tracking/credit control/custom form designer/forecasting module/cash flow/etc./etc./etc. module for a client, you've pretty much done 'em all.

And after a certain point you can then start to estimate the complexity of other systems with a small amount of talking to the client and research that lets you know the scope and the gotchas.


I'm a strong proponent of learning formal and informal estimation methods - I'd start with Steve McConnell's book "Software Estimation: Demystifying the Black Art." For estimates that I would put money on, I will estimate in 4 hour blocks and get features down to the most granular level. If I don't understand it at that level, I don't understand it well enough to estimate yet.


This is a great analogy on why it's so difficult to estimate: https://www.quora.com/Engineering-Management/Why-are-softwar...

Estimating should be fun, and there are various ways to do it, such as estimating as a team (majority wins), t-shirt sizes, etc.


Probably it's because developers tend to estimate the technical part first and foremost. Then comes productization, then comes integration.

If the estimate is consistently off by 3x to 6x, that's what's likely missing.


I can’t accurately estimate non-trivial software. I’ve never met anybody who can accurately estimate non-trivial software. I’ve never seen anybody claim to be able to accurately estimate non-trivial software. Yet for some reason people still insist that estimating non-trivial software is not only possible, but trivial.


I feel that to get accurate estimates you need the same people using the same tech stacks producing roughly the same solutions week in and week out. I imagine something like making slightly interactive marketing sites using Ruby on Rails every time, Active record every time, etc. These estimates will be perfect.

On the other hand, for a complex SaaS back end and front end doing something unique, a complex system to which you are being asked to add something groundbreaking, where you need to pay off various tech debt as you go in order to get the job done, where you are negotiating the requirements as you go along, and finding dead ends and impossible things that you can't do that you thought were simple and vice versa - I think it is hard to estimate. I think in that case estimation is a waste of time and it should be thought of like a "value train". If there is a genuine deadline (rocket launch) then best to manage it well with appropriate large buffers and dependency management. I don't see this being done anywhere, it is usually the business trying to haggle down the engineers on their estimates (even though the engineers are paid a salary so ???).


The issue with this is that if you're doing cookie-cutter websites, at some point it's going to be more efficient to abstract out the shared functionality and build a framework of some sort. You'll become more efficient, but you'll either lose predictability or you'll just be padding estimates to whatever you were estimating before. Not a bad position to be in, but you'll always be either predictable and inefficient or efficient and unpredictable.


>Yet for some reason people still insist that estimating non-trivial software is not only possible, but trivial.

It's always maddening to get into debates about this stuff with project managers who believe this nonsense. I know the Fibonacci point scale is supposed to address this by giving PMs something, but I've only ever seen it turn into time-estimates-by-proxy.

These days, I only ever give estimates in terms of time scale. A task will take "hours", "days", "weeks", or "months". As long as it's within my power, my team will only ever estimate on that scale.


This 100% this. I run my teams via "Kanban" (I don't know how kanban it actually is, but that's what I call it).

The only question I ask around prioritization is "Is it worth doing?" If the answer is yes, I ask the stakeholder if they care how long it takes. If they do, I ask for a range and give a best estimate if it's achievable. If the answer is no, I ask why they care how long it takes if it's the most important thing to do.

My engineers know that they're expected to do that one project to completion, and we loop in other stakeholders (marketing, sales, QA, etc) progressively as we approach completion. Admittedly I work at a company of 20, but it works remarkably well, despite the absence of a "schedule"


> If the answer is no, I ask why they care how long it takes if it's the most important thing to do.

Well... would you rather get $150,000 two years from now, or $90,000 next month?

You generally don't want to be making any decisions off ordinal comparisons.


You meet the PM/PO. They want to do Scrum, standups, retros, grooming, poker planning, the whole thing.

Ask them to prioritize the tickets. Maybe they want some rough estimate to decide between some of them, maybe not (which, surprisingly, is the case most of the time). During the same meeting you can groom the tickets.

The developers then start working on those tasks, when they are ready you release them according to your company policy and calendar.

By then everyone has forgotten about Scrum, management is amazed you release so much stuff, you can probably fit all that into a one-hour-max weekly team meeting, and everyone is happy.

I'm pretty sure that's applicable in a lot of pure-play tech teams and quite scalable (I managed up to a 15-dev team this way, though PR validation etc. overtook the management part a bit much).


> don't know how kanban it actually is, but that's what I call it

We also work with a Kanban-like methodology. Basically a pull system with a hard limit on WIP.


Wow, I wish the Fortune 10 company I work for had this kind of insight! The "agile" process I work under is causing horrible outcomes; it's basically a sweatshop operation. I'm a tech lead and am always fielding questions like "why did this take so long?" It took so long because there were unexpected difficulties A, B, and C, two of the four guys on the team are mid-level devs and not senior level, and so I worked a third night this week to try and pull this off. The sprint plan never makes it past 1-2 days each sprint, and then I catch a bunch of righteous indignation with phrases like "you committed to this" and crap like that.

"Why can't we better estimate these things?" Because when I give honest estimates I get threatened by my boss. Your process is not reality driven.

"Can we break this work down into smaller stories?" No, I can't break down the final integration of all the parts into a smaller story. Or more commonly, they told me we have "too many stories" and can't keep track of them all when we break them apart.

The best is when a Product Owner or other business type says "I'm not a technical person." Then why the hell are you working in such a technical business and calling the shots on stuff you don't even understand?


Most maddening thing I've seen is PM translating agile points to man-hours because that's what upper management requests.

Same people: you must complete assignments in the estimated time, or else, be prepared to work overtime including weekends.

It drives me bonkers.


We have a great labor market. Don't be shy about quitting when things get cray-cray!


By "we" do you mean software engineering in general, or the Bay Area? Asking because most people commenting on HN tend to assume the entire universe is just the SF Bay Area.


Even non-Bay Area SWE in general is still much better than most other job types, and probably much better when it comes to having a nice place to live anyway.


I'd say non-Bay Area devs are relatively in the best position, because remote work always gives a chance at salaries much higher than local ones, while, unlike in SF, the cost of living won't eat most of what you earn.


Quarterly planning is equally frustrating. I don't think we've ever had a quarter where we're on track three sprints in. Something always comes up, we pivot a bit, or our estimates were as uninformative as we all thought they were.


Some people really do need to have the difference between an estimate and a commitment explained to them very carefully.


I had that too. Also they were negotiating down every last estimate while we were doing "planning poker" which was only a façade and was really us committing to unrealistic man-hours estimates unwillingly.


Or you get someone very senior in the company who rocks up in your planning meeting demanding to know the “real estimate”.


The real estimate is ‘as long as it takes’.


Similar, but I'll ask for a range, which has a built-in confidence level. A lot of developers are very reluctant to give estimates for various reasons, but if the reason is that they don't know where to start, I'll do a thought experiment with them.

I first ask: will the task take 20 years to finish? They usually laugh and say of course not. Then I ask, what about 20 seconds? They laugh again. We keep going on both sides of the boundary until they stop laughing and it starts sounding more reasonable.

Then I have something to work with like 3 to 6 months which gives me more options. I can accept the risk or I can take other steps to reduce it or break it into more manageable bites. It’s not perfect but it’s worked pretty well.


The trick is to break it down until it’s all trivial. That process takes time itself, but you can estimate that much more easily. I’ve recently found I end up giving an estimate of, say, 2 days to get a solid estimate, and then come away from that with 2-5 weeks of tasks that are no more than a day each. Estimates of a day are pretty accurate (for me).

Another approach I had some success with was estimating the 80% likelihood case, i.e. I'm 80% sure I can finish this in the time I'm estimating. Less accurate, but much more predictably on schedule, which is more useful for many stakeholders.

A last one is giving rough estimates and being clear about how rough they are and how much time it will take to get more confidence. I was handed a 50-page PDF of API docs for a service and asked what the estimate was for integration. After 15 minutes I said "1-4 weeks; if you want more accuracy, I'll need a few days". The answer was "no problem, we wouldn't consider it unless it was < 1 week".

These examples are all on the weeks to low number of months scale, but the same applies further up. Having a good understanding of the codebase and domain are very useful.

Estimating isn’t easy, but treat it like any skill, it can be improved. It requires some flexibility from product managers, but open communication goes a really long way and results in better decisions overall, less wasted effort.


This is better than not breaking it down, but it's still not very good. McConnell advocates this approach strongly in his book on software estimation[1], and I've seen it work to some extent. It works for reasonably repeatable projects that are very similar to earlier projects you have experience with. But when we get into seriously non-trivial projects that are more "R" than "D"... well, it just isn't enough.

The problem is that on a non-trivial project, these breakdowns are guesses which carry considerable uncertainty. And the less you know, the more the unknown-unknowns get you. If you write a detailed breakdown of a major piece of software before writing a line of code, at the end of the project you will look back and laugh and laugh at your own innocence.

As just one random top-of-mind example, consider Carmack writing Doom's rendering engine[2]. He tried several approaches that didn't work before striking the right balance with pre-computed BSPs. Some of the things he tried were fundamentally good ideas and would later be used for different games - but didn't work on the hardware of the time. How do you estimate something like that? "I'm going to spend a month on my first approach. That could be one month or two. If that doesn't work (50% chance), I'll read research papers and textbooks (1 week - 3 months) until I find a promising technique. Then I try the most promising of these; since this is an approach I haven't even read about yet, it could be one week to implement, or six months. And there's some chance that will fail too and I'll have to do it again." The final estimate is anywhere from one month to a year. You just can't know.

[1]: https://www.amazon.com/Software-Estimation-Demystifying-Deve...

[2]: https://twobithistory.org/2019/11/06/doom-bsp.html


I have that now in a project. There are two things that need to be made where I just don't know if I'm going to get them to work well within the larger design I have in mind. I wanted to prototype them in the summer, but the backend wasn't ready, so I only had very small mock data served in an unrealistic way.

The only thing I can do is move them to the front of the project as much as possible, so we hit the risky bits as early as possible. Once we get them to work, the rest should be smooth sailing, but I'd rather fail before that time has been spent. PM agrees.


That's why you do a time-boxed spike, i.e. a fixed amount of time to do the investigation so you can work out your preferred approach. Once you have sufficient knowledge, then you can estimate.


Some things can't really be broken down into trivialities unless you already know the answer.

E.g. I worked on a warehouse app before. None of us ever had before.

If you have no experience with any of this, how do you break down picking: SKU design (ours weren't just random numbers), barcode printers, and barcode scanners to be used in your software project? That part alone took us about 7 weeks. Including a complete SKU redesign because our initial test hardware worked better than some of the stuff we got later.

Nevermind all the hardware integrations, which was also new to us: multiple printers (barcode label, shipping label, pick list, packing slip), barcode scanner, a scale, all seamlessly integrated into the warehouse application.

This also reminds me: one of the services we decided to use had a serious (for us) bug that we didn't find out about until late in the game. They didn't fix it for 3 months, and it blocked us for a while. So yeah, you could very well end up integrating more than once. You can't really break that down into trivial parts.


I do something similar to the OP when working on established systems.

The trick is to treat everything that you can't break down to a triviality as an area of research until you've solved that problem.

So, say, you plan three days in a sprint to research a certain requirement, the results being a set of small easy-to-estimate features and maybe some more tricky ones.

If you're working in an area where your team have absolutely 0 experience then there's no way you can estimate anything accurately. In these cases I hope that you're working with a small team (2-3 developers and a BA/PO) who are highly experienced and work well together. Then you should run flexibly with features. Work Kanban style and implement the 80/20 effort/win features - and don't be scared to drop a feature if it's looking like it's not part of the 20% effort group of features.


The biggest problem I've found with this approach is that it drives the prioritization process towards smaller and smaller pieces of work, because those can be accurately broken down and estimated. This means larger, but proportionally much more valuable, pieces of work do not get picked up.


Hmm, I disagree. The larger, more valuable pieces are broken down into quantifiable parts, but are still being done. Less unpredictability, and that's the point.


I can only speak to my experience. The friction involved in slicing those pieces of work ends up creating a sizable force. I'd love to hear more about how you've handled it well.


Well, that force is the design effort. "How do I break this up into parts?", "I need to test this", "I need to isolate that or it's not manageable". Otherwise you're just cowboy-coding, shaving yaks along the way to implementing the Epic Goal, leaving a trail of legacy.


Again, this is just my own experience, but what I've found is that, once you break the valuable and large piece of work down into small pieces, those small pieces don't get prioritized, because each piece on its own isn't tremendously valuable, so they get outcompeted by other small pieces of work that deliver their own standalone value earlier on. I think the best answer to this is to advocate individually for large work of this type, but I would rather be able to work it into a repeatable process.


Breaking it down until it's all trivial is the most non-trivial part of the exercise.

If everyone could do that trivially, then there would never be any problems.


Breaking tasks down is like writing another program. Plenty of tasks branch off based on their result; you end up building a task DAG and not a task list. You could try to estimate that with Monte Carlo, but in reality, you won't get the correct shape of the DAG up front anyway.


And the most time-consuming.


You cannot, by definition, break down a non-trivial project into trivial parts without having at least 1 non-trivial part in there somewhere.


The non-trivial part might be combining the trivial parts. :)


This is a good point and something probably not caught in the estimation of the smaller trivial parts.


I think most software falls into this category these days. The majority of software based companies are automating processes that humans can or could do manually - they're just being integrators, linking things and tasks together in useful ways.

I don't have a statistic, but I'd be willing to bet the majority of code in existence is not algorithmic in nature.


I agree. I've been writing code for 20+ years. It's all been BPA code. Only once, very early on, did I do any complex algo.


BPA?


Business Process Automation


Right, and it will take potentially unbounded time.


Think about how variance propagates through that chain of work though, especially for tasks that are contingent on other tasks...


This is underappreciated. My experience on a carefully-estimated software project was that the median task would come in 20% under budget. However, we still ended up over budget.

Every iteration there'd be one or two small tasks that blew up to >1000% of their original estimate.
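A quick simulation shows how that plays out: with right-skewed task durations (lognormal here; all parameters are invented for illustration), the median task lands 20% under its estimate while most projects still bust the total budget.

    import random

    random.seed(1)
    est_per_task, n_tasks, n_projects = 10, 50, 2_000  # days, tasks, trials

    def project_actual():
        # Median task ~8 days (20% under the 10-day estimate), but the
        # fat right tail occasionally produces multi-x blowups that dominate.
        return sum(random.lognormvariate(2.08, 0.8) for _ in range(n_tasks))

    budget = est_per_task * n_tasks
    over = sum(project_actual() > budget for _ in range(n_projects)) / n_projects
    print(f"projects over budget: {over:.0%}")  # roughly 3 out of 4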


I've estimated dozens of projects over the years, and in my experience the more you split a project up, the larger the total estimate is going to be. You can even use this as a tool to get more budget/time allocated to your project. Break it up as much as you can: no task is ever going to be estimated at less than 1 hour, but in reality you can do 10 of those tasks per day, whereas if you bunch those 10 tasks together people will estimate 5 hours of work. Split them apart and you have suddenly got 10 hours to do the same work!


I'd extend this by aiming to have each trivial step be a useful result that can be immediately released to market. If you can't make every step useful, try to get as many as possible useful with a minimum number of steps between useful results. Put a hard estimate on the first few trivial steps, but recognise that it's largely guesswork beyond that. If the trivial product can be released and start earning money then that reduces the pressure for accurate longer term estimates, as the earnings can fund the uncertainty. Recognise that this is an unachievable ideal, but it is at least worth striving for since even an imperfect attempt will deliver some benefit.


The problem is that in many business domains, the absolute minimal viable product, or even a customer-worthy demo, is a 6 month long, 10-15 person project. If the market already has complete solutions available, releasing an app which does 5% of what other apps do + 1% that's different is simply not viable, and is likely to help you never get a second look.


Well, yes, kinda, sorta. If you go into a market that already has a complete solution available, and that solution works well, you need to be at least as good as competitors.

But the way you go into a market is by figuring out what doesn't work. What is an unaddressed pain point. This is a hard thing to do by definition (that's why the payout of being successful at it is so big), but if you manage to do that, you don't need 10-15 people for an MVP... in fact I'd argue you actually can't truly do an MVP if you have a 15-person team. You need the large team to expand the MVP, but to identify it / have a customer-worthy demo, the 15 people will just get in your way.


The view in my company at least is that even if you have a strong differentiator feature, if you don't have more or less all of the baseline features that all the established products are doing well, people will take a look at your MVP because they like your killer feature, try to use it, find out that it doesn't do all the things they need, and never look at it again, even if you do fill those features later on.


That can't be true, or no startup would ever get off the ground. Just look at any Adobe competitor and see if they started fully featured (most actually brag about their lack of features compared with Adobe products).

The more likely explanation is that your "strong differentiator feature" is not really as strong as you'd wish it was.


Sounds like "how to build a successful business".


If you can break them down to such a low granularity, doesn't that imply the software is trivial? To take an extreme example, how do you break down the software developed by Waymo?


> If you can break them down to such a low granularity, doesn't that imply the software is trivial?

No, it doesn't imply this. One granular piece of Waymo's software is "develop a component that understands the external environment as well as or better than a human". It's easy to say that, and it's even relatively easy to break that down into risk scenarios etc.

But developing it is... non-trivial. And there are parts of it where it was unclear whether they were possible at all.

But that is because large parts of Waymo were a research project, not software engineering.


Right, I'm not saying they can't be broken down somewhat, but the parent comment said he broke them down until the tasks were trivial. I don't think that example is trivial! Or that it can be estimated to take about a day.

Regarding "research" vs "engineering", I don't think you can cleanly separate the two; many otherwise straightforward projects include a research component, even if it is just about using a new browser feature in a novel way, or some such.


For me, research is something where it is unknown whether something is possible. Engineering is where we know it is possible, even if it isn't clear how long it will take.


Under that definition, very little is research. Warp drives are research. Curing cancer is engineering.

Obviously, you have implicit constraints in mind. "Is something possible given $constraints?". And the set of those constraints is where the line between research and engineering starts to blur. If you set $constraints to "in time and under budget for the project", which seems like a very reasonable value, then it turns out that a lot of programming tasks become research work, unless you're only churning out cookie-cutter websites.


We don't know if (all) cancer can be cured at all.

And no, "in time and budget" doesn't suddenly make something into a research project.

Speeding up some technical process might be research (eg, the old 1TB sorting benchmarks drove research).

But project constraints, and the fact that the team themselves have never used a specific technology or whatever, don't turn it into research. Yes, I know that's a term people use, but it's a different thing from scientific research.


I think this is due to people adhering to a bad implicit analogy: "Estimating software construction projects must be like estimating other types of construction projects, e.g. bridges or cars." The problem is that most software projects don't have a myriad near-identical existing solutions from which to source a project's duration. We aren't making a building with the exact same floor plan, using the exact same crew size with the exact same skills, in the exact same regions, using...You get the point. It's just a bad comparison.

Most software projects are more like R&D than construction. For the adventurous analogy, software is like hiking an unknown trail. You might generally know where you're going, but you can't foresee the river, detours, and feral hogs you'll have to battle to reach the end.

You simply can't estimate software the way you can more tangible products. You can do Scrum planning poker and pretend you're not playing Numberwang, if that makes you feel more in control.


Construction projects have detailed blueprints used to construct the estimate. Developing the blueprints is much more like developing software and probably has similar issues with estimating the time it takes, effort, etc. But nobody ever compares that. There is no construction phase in software (except compiling); the entire project is the planning stage.


I also think this comparison is made by people that have never been involved in construction projects. Even something reasonably small in scope like rebuilding a house is still an exploratory process. Redoing ours we found the frame was no good and had to go back to get planning permission to build a new top structure. Even now we're getting our lower outer wall repaired (basically the only part of the structure that's actually original now) and the initial cleaning showed there was more work involved. So we renegotiated the cost with the builder.

Then you look at big civil engineering projects that seem to always be over-time and over-budget.

The treatment of estimates in project management for software is way more like the assumption we have an assembly line where even if there are issues these are routine enough that they can be padded in. The problem is that software is an exploratory process and not an assembly line.

It's even worse the more creative the project is. I've worked in games for fifteen years and I don't think it's possible to estimate when a game will be 'good'. Making one is a process that requires a lot of feedback, iteration and scope management. It's part of why we see so much reflexive crunch as the reality of making a game hits poor project management strategy based on this assembly line thinking.

The words Gantt chart make me shudder.


That's why I think a more correct analogy is that programming is making the blueprints; the blueprint is the source code, and construction is what compilers do.


I'm reminded of the book "The Hard Thing About Hard Things"... and even more, of this also-classic post: https://www.quora.com/Why-are-software-development-task-esti...


I even think that the construction analogy does not hold merit: I have never heard of a really big project that was a) on time, b) on budget.

I think those projects might be best compared to the software projects we think of here...


I can. Do your estimation normally. If you've done it a dozen times, multiply by sqrt(pi); if you've done it a couple of times, multiply by pi; if you've never done it before, multiply by pi^2.

I'm not off by more than 20%.
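For what it's worth, here's the heuristic as a sketch in Python (the multipliers are the parent's; the familiarity thresholds are just my reading of "a dozen" and "a couple"):

    import math

    def pad(estimate, times_done):
        # Pad by familiarity, per the comment above.
        if times_done >= 12:
            return estimate * math.sqrt(math.pi)  # ~1.77x: done it a dozen times
        if times_done >= 2:
            return estimate * math.pi             # ~3.14x: done it a couple of times
        return estimate * math.pi ** 2            # ~9.87x: never done it before

    print(pad(10, 12), pad(10, 3), pad(10, 0))    # ~17.7, ~31.4, ~98.7 days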


How can you have an accurate estimate when you're multiplying it by 10? A small error in your original estimate gets multiplied by 10. If I'm off by 10 days in the original estimate, that means I'm off by 100 using your method.

Also, I think by definition, a non-trivial project obviously hasn't been done by you many times before. Doing something you've done a dozen times is trivial in this sense.


Your heuristic that someone who has done a dozen previous software estimates will not take more than 177% of their current estimate to finish a “non-trivial software” project seems wildly optimistic to me.

Consider that many software projects led by experienced software engineers go so far over time that they are never finished.


No, that's category 3, pi^2, or roughly 10 times the estimate.

Category 1 is

>I've written a script to get the data from tables 1 to 24 of this database, now I need to write one for table 25. It should take me an hour.

>Oh look, it took me nearly 2 hours because I forgot that table is owned by a different user to all the other ones and I didn't have the right permissions.


So to clarify: what the top-level poster here is calling “non-trivial” you are calling “category 3”, and heuristically estimating that it will take 10x longer than anticipated.

But this (a) is still quite optimistic, and (b) ignores variance. The potential time a new research project could take is unbounded.


Yup, padding estimates is akin to what you might do if you had a repeatable process but there was some predictable variance (e.g. illness, machinery maintenance, holidays) rather than a bunch of unknown and exploratory problem solving.


You're almost never wandering into complete unknown. Your original estimate already encompasses your uncertainty; padding it protects you from optimism.


Right, but the padding is basically an estimate of how certain you are about your uncertainty. So not only does the original estimate not encompass your own uncertainty, it's also arbitrarily scaled by multiples of itself. Which is rather to say the original estimate is completely useless. Not to mention the OP claims to never be off by more than 1/5 of this greatly inflated estimate!!!

Whereas padding in a more assembly line like situation can actually be measured and forecast from historical data.


I do it well, but only after I completely understand the problem. For instance, I wrote out a series of design documents, then gathered buy-in from management to start hiring. I told them that it would take 2 years to complete, but that we could start realizing value in the first six months. Every timeframe was hit without crunch, and everything landed roughly on schedule.

The problem is that most people I've met don't really understand what they are actually doing and/or don't have the leadership to grow people to achieve.

It is not trivial in the least and requires a great deal of lead time even to understand the problem(s). This field is in its infancy, and I don't yet know if my insights can be replicated.

I'm thinking of writing a book where I illustrate thought exercises and career challenges that help people grow from junior to senior principal, but it is proving difficult.


A 2-year horizon -- even for massive, strategically sound projects -- is a non-starter, simply untenable in virtually every situation I've encountered in my 21-yr career in software dev and mgmt and consulting. Not saying you're wrong to think and advise in those terms, just that the opportunity to do so is exceedingly rare. Big public companies tend to be overly constrained by quarterly myopia, and startups usually don't have the runway to attempt to plan that far out. Personally, I once endured a project being shelved after 15+ months of solid, productive effort. I've also been the catalyst, at least twice, for the adoption of paradigm shifts that ultimately spanned ~2yr timeframes (one was performance as a first-class citizen, and another was incorporating RWD (responsive web design) into a large company's MO). These successful cross-cutting / interdepartmental changes took time, but I don't think either of them could have happened if they'd been pitched up front as multi-year projects. Maybe you've just been luckier finding far-sighted decision-makers.

(If I seem bitter, I'm not. Just sharing my lived experience.)


I think it depends on scale and the inertia involved, and my original point was that accurate estimation is possible.

There are many other challenges as you point out.

I am currently on year two of a five year project. The key (and part of the essential difficulty) is finding quarterly or six month valuable deliverables that help ease anxiety of everyone involved.

It is not easy, but it is possible.

While all this is going on, I'm also working on a design document for a potential 10 year project. I doubt that I will pull this one off since I'm having difficulty finding those quarterly deliverables, but I'll keep grinding away.


Right on. Good luck!


Isn't the hard part understanding the problem and writing the design documents? That is the task which takes an unknown amount of time. Once the roadmap is in place and people are assigned to build the things that have been designed, the problems that occur are much more concrete. Unless there is some oversight or omission in the design, a change in the requirements, etc.

How long was the discovery and design document phase on the 2-year project?


For every project I work on, I make sure I understand the problem, define the requirements, design the solution, and validate any necessary design patterns before I begin planning the implementation. How long that takes depends on the scope of the project, but I have never seen a project save time by skipping that bit. I also end up being pretty accurate with my time estimates.


What I have seen several times with that approach is a wonderful plan that needs to get scrapped about half-way through when the requirements suddenly change, for various market-related reasons.

Some sketch of the end-design is always important, thinking about major components and future evolution, but a detailed plan has never been worth it in my line of work.


If you’re making radical changes to your design half way through, then you didn’t understand the problem you were trying to solve to begin with, and possibly didn’t define your requirements properly either. If small changes to your requirements mean you need to do significant redesign, then you didn’t design your solution properly.

The most generous way to view what you described is that you’ve had to cancel your project half way through and start a new one. There’s nothing you can do to make business decisions like that work smoothly. In my experience though, a more likely cause of what you’ve described, is that the project was just planned poorly to begin with.


The problem I was alluding to is exactly one of requirements - requirements aren't always exact, and they may change in unexpected ways as time goes on. In my experience, this is relatively common when building products that have a long time to market: since there are no hard requirements to begin with, just guesses on what features would be useful to have, it is easy for different marketing and product people to have different opinions and change their minds on what is or isn't an important feature.


What you're describing is exactly the problem that decent planning solves. I can only imagine two possible reasons for the type of pivot you're talking about: either the problem you're trying to solve has changed (not very likely), or your understanding of the problem has. You can avoid that, and all of the time you'll waste, by simply investing the effort required to properly understand the problem up front. This is the reason, I suspect, that many successful founders are solving their own problems. They have some experience in some sector, know what problems it has, and bring a solution to market.


To give some more context, I'm working in a medium-size, well-established company in its field.

The kind of problem we have when launching a new product is that we can have a well-defined list of everything that is required of the new product to be assured of success - but it's going to take 5 years to build that, with at least 20 people.

What can we cut and still have a successful product, that we can launch in say 1 year (or 6 months) with say 10 people? That is much harder to say, and different product leaders have different opinions, based on discussions with different clients. And the consensus opinion may simply change.

So sure, the architecture is generally driven towards that 5 year product idea, but the short term estimates are always going to be geared towards a specific subset of that. And that subset is easily subject to change, from one quarter to another. We won't throw away the design if that happens, but we will definitely throw away any longer-term estimates.

And charting out a 5-year plan that we would only 're-jig' as priorities change would be a recipe for certain failure - teams change, people leave, etc. You can't hope to have an accurate estimate on that time horizon.


Without trying to sound rude, this is just your organisation being bad at business.

> a well-defined list of everything that is required of the new product to be assured of success

Only bad software tries to be everything to everybody (not to shit on bad software, it’s a multi-billion dollar industry). Your customers have a problem that other products weren’t solving, your solution to this problem is the core of your value proposition. It’s the only thing you need to implement to launch your product, everything else can be added iteratively over time. If you don’t know what that problem is, then you’ve failed before you’ve even started. You’re not driving your company forward by writing code as soon as possible, you’re just doing skids in the parking lot.

I’ve seen so many people get sucked into the fiction that their product is by necessity too complex to plan and design properly. In reality I’ve never seen it to be anything other than a cover for not actually knowing how to run a product well. You’re not saving any time by making it up as you go. All of the design decisions you need to make still need to be made, you can choose to make them in a controlled manner, or just do it randomly and hope you pull through. But good design is not an emergent outcome of the resources you devote to implementation.


A key element in inexact requirements is usually human surprise. A key to some of my success is a bias toward infrastructure, which is more about physics than messy human clients. Physics is more predictable, and the messy human stuff I have to deal with is whether or not my solutions are palatable to developers.


That is the hard part, but this is why it requires an inordinate amount of time combined with short term tactical work to keep things running.

Most work can fit within quarters and get executed quickly, but it takes something else to have long term vision.

The discovery process was paying attention and looking at what is actually the goal, and it generally requires the discipline to slow down and focus on the future without giving into the short term time sinks.

I had the benefit of 70-100 minutes per day of being stuck on a boat commuting for years to focus on discovery. The particular project that had a two year vision had about six months of shit to eat in office while thinking and planning on the commuter boat. It is amazing what one can accomplish by simply writing and the time to focus.


Completely off topic, but somehow a daily commute via boat seems like a nice and cosy way of going to work. Where did you commute from and to, if you don't mind sharing?


I used to commute from Bainbridge Island to Seattle. It's not bad for a couple of years, but it starts to get old and isolating. It was very effective for some growth that I went through, but became an annoyance as my designs and ideas began to vastly overwhelm my ability to execute and lead a number of teams to execute.


Looks like a very beautiful place! Of course everything mellows out and becomes "normal" with time.


How long does it take to understand the problem and write design documents?


According to every project manager/“agile coach” I’ve ever worked with, that would be part of a one-hour “grooming meeting” and if it takes longer than that, you must just be an incompetent developer, in need of more “coaching”.


3 to 6 months... maybe longer. The key is to focus that time towards collecting data to help sell to stakeholders.


There is also too much weight given to hitting the date over achieving the product/market polish, value, and fit needed for success. People forget a project was late if it succeeds; they never forget an on-time project that failed because it was rushed, buggy, and not polished for production.

Discipline and time estimates are important, but not more important than a product setup for success. One that is easy to sell and market because it has value, is polished and a good experience.

Game development is notoriously late, and as long as the developers have time to finish it right, people forget. Everyone knows that when a game company's business/marketing/financial side pushes out an incomplete or buggy product, it can permanently harm perception.

Valve Time is something that is really common in game development, where engineers/developers/designers/product people are still in charge and making good, solid product. [1] Technology, innovation, and especially game development need time to make something fun, a good experience, and a solid product.

Creativity can't always be rushed; there must be "open" and "closed" modes of development, not crunching in "closed"-only mode. John Cleese has an excellent talk on managing creativity, and I recommend that all people I work with watch it; very valuable insight. [2]

[1] https://developer.valvesoftware.com/wiki/Valve_Time

[2] https://www.youtube.com/watch?v=Pb5oIIPO62g


I was taught that it takes about a quarter of the time to estimate work, so if you think something will take 1 day, you should have spent 1/4 day estimating. If you think 4 weeks, then 1 week estimating.

Whenever I'm asked to estimate, I tell them that I'll take a week to estimate a 4-week piece of work; they almost always say don't bother. Then, when the made-up estimates are wrong, we can discuss the fact that they didn't bother estimating.

It is a funny thing but works every time (in corps).


The much-derided “execs” who demand time estimates don’t do so out of spite. They do so because their costs are measured in man-months, and a business needs to know how much something is going to cost to decide whether it’s worth building. Also, they may have partner companies who they need to coordinate timelines with.

I’d guess that freelance developers who are paid per project, and thus have to own their P&L, are very good at estimating.


I don't know about freelancers, but typical fixed-bid RFP responses are lowballed in order to get the work, then they make their profit on the inevitable change requests. So accurate estimates aren't as important as you might think.


OP here - it's just (somehow, repeatedly) surprising the magnitude to which you can be wrong, even when it's just you and there's no pressure from anyone else to get it done quickly and it's really just an honest guess.

I see this over and over again with people I know bootstrapping projects; the over-optimism, I think, is just part of being a builder. Sometimes in startup circles there seems to be this meme that you should "throw together a prototype of the tech in a week" as the standard way to get going with an idea. In practice though, it seems impossible that you could get anything even usefully shaped built in that amount of time. For me, when I get down to work on something, even just plumbing together the vague structure of an app takes longer than a week full time.


Have you tried to track data for multiple projects? For the specific case you wrote up, is the lesson simply “multiply my estimate by 4 next time”?


I do pretty well at it. My method is to identify all the components of a system and individually assign how many days I think I could do each in if I was really motivated. Then multiply by about 2-3 depending on your experience implementing previously similar modules. Then multiply by 2. I usually end up with a conservative time estimate where some things take longer and some shorter, but overall it generally works. On this project that approach would have been: I can do it in 3 weeks, give myself 6, and double it = 12. People love it when you come in at or under the estimated time and really don't like it when you are late. Give yourself a buffer.


I ran an analysis on our codebase: most of the files had 40% rewrites, with some outliers being 90% rewritten. The file sizes followed a linear distribution, usually from 100 to 2000 lines of code. When the 90% rewrites hit larger files, they accounted for almost 50% of the total effort put into development. What I got from this: for a single component I take the ideal estimate and multiply it by 2, and do the same again project-wide. There's additional friction when integrating outside components, but I'm rarely blamed for it, so I don't care enough to put it in estimates.

Now the hard part is selling the idea to management, like "here are our ideal estimates, let's multiply them by 4". When doing freelance work I usually bill for the ideal estimated time multiplied by 2, the rest being my risk; on the other hand, any changes in the requirements or miscommunication are a risk borne by the client.
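For anyone wanting to reproduce that kind of rewrite analysis, here's a rough sketch of one way to do it (my own approximation, not the parent's script): it treats total deleted lines per file from `git log --numstat` as "rewritten", and ignores renames and binary files.

    import collections
    import subprocess

    # Sum deleted lines per file across the repo's whole history.
    log = subprocess.run(
        ["git", "log", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout

    deleted = collections.Counter()
    for line in log.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":  # skip blanks and binary files
            continue
        deleted[parts[2]] += int(parts[1])

    # Rewrite ratio ~= lines ever deleted / current file size.
    for path, dels in deleted.most_common(20):
        try:
            size = sum(1 for _ in open(path))
        except OSError:  # file no longer exists at HEAD
            continue
        if size:
            print(f"{path}: ~{dels / size:.0%} rewritten")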


That is pretty close to how I do it and I'm not the only one who thinks I'm good at it. This only works on a stack and team that I'm familiar with. It works with projects that are days, weeks or a year long.

If you can't break it down you don't understand it and if you don't understand it you aren't ready to estimate it.


It is possible that a project is non-trivial to you but trivial to someone else; that someone else could then properly estimate it. Also, most software projects are trivial, so I don't see why estimation would be impossible; it is not like your typical React app is going to push the boundaries of research topics. It might push the boundaries of mediocre developers though, in which case it is a non-trivial project for them.

I think the real reason estimations are hard is that basically every developer is working on projects too hard for them. Like, if you are so good at your current job that you can reliably churn out good solutions with ease then instead of just producing you would move on to do harder things for better pay.


It's magic, but I'm actually pretty good at it. Pretty much every time I'll say the number, my boss will be unhappy and get me to say a lesser number, then it'll be what I predicted. I have a method, but I'm pretty sure it's just magic.


Estimating non-trivial software is easy. Making those estimates any kind of accurate is the problem.

I was talking with my project manager today, and he was asking me if we could finish on time if we got twice as much time as I estimated. My reply was that we’ve tried that 4 times now, and we’ve never reached the point where we actually finish within the time estimated.

So increasing the time would just mean we fail later, not that we suddenly succeed. I’m firmly starting to believe that work expands to fill the time allocated to it.


I find I can "accurately" estimate it by estimating the time for each major feature, and then tripling that number. Typically I ship ahead of schedule, even with hiccups. Of course, if you're in uncharted territory you should spend more time making sure your funding is secured regardless of ship date. Nothing worse than having the rug pulled out from beneath you one day.


It's hard to estimate without breaking it down into smaller tasks, and this process is laborious. It also assumes we are tackling stuff in which we already have expertise.

The rule of thumb I use: anything which takes more than 3 days needs to be broken down further into tasks. If any task takes more than 3 days during execution, then it needs to be broken down into more tasks.


Many people have the ability to estimate non trivial software, given the number of software products which ship on a schedule and don’t slip. Things like features announced at big conferences. Granted, the companies that do this can add people to projects as necessary, so they can work with months rather than man-months.


Estimation needs a baseline. Predicting the time needed for a greenfield (from scratch) project without some experience working on that project with a fixed team ... will lead to misery.


How do you do estimates for contract jobs then? Clients are very uncomfortable when you say “between X and 3X weeks”


Often it's the assumptions that go into an estimate that are critical, and defining them along with the estimate. Often it's problems and delays on the client side that delay things or make them harder.


Deluded people are everywhere.


Developers are optimists.


The thing about estimating is that you can't factor in the things you don't know:

- The tool you planned to use has a bug/defect that blocks you.

- The people who said they could give you some information you need, can't.

- You misunderstood or were misled as to the capabilities of a tool you need.

- The parts you need don't actually fit together as planned.

- The documentation you were relying on is wrong.

- A delivery you are relying on won't actually be ready on time.

- A process that's worked every time has an unhandled edge case that you're going to trigger in this project.

- A regression in a software package will block you because you can't use the previous version for other reasons.

The more moving parts there are, the more likely it is that one of the interface points between them will block or delay you in one of these ways, potentially for weeks, maybe even months.

So for any nontrivial project, you should expect a 2x-8x delivery inflation over your "safely padded" estimate. You have a 75% chance that you'll deliver within 2x of your padded estimate if you've made a good estimation, sliding to worse as the number of moving parts increases.
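
If you want to play with that intuition, here's a quick Monte Carlo sketch - the lognormal shape and its parameters are my own assumption, picked so that ~75% of runs land within 2x of the padded estimate:

    import random, statistics

    def simulate(padded_estimate, n=100_000):
        # Assume the true delivery multiplier is lognormal: most projects
        # land near the estimate, a long tail blows way past it.
        outcomes = [padded_estimate * random.lognormvariate(mu=0.0, sigma=1.0)
                    for _ in range(n)]
        within_2x = sum(o <= 2 * padded_estimate for o in outcomes) / n
        return statistics.median(outcomes), within_2x

    median, p2x = simulate(padded_estimate=12)  # weeks
    print(f"median delivery: {median:.1f} weeks, P(within 2x) = {p2x:.0%}")

More moving parts would correspond to a fatter tail (a larger sigma), which drags that probability down.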


You can add personal problems that can affect focus to the list


This is great and tallies with my experience.


Developer to self: "It'll take me about a week"

Developer to development lead: "It'll take about two weeks"

Development lead to project manager: "It'll take about 4 weeks"

Project manager to self: "It'll take double that plus 2 weeks"

Project manager to management: "It'll take 12 weeks"

Management to client: "It'll take 8 weeks"

Actual time taken: 16 weeks.


Client: Wow, this is the first project that took only 2x time. We usually anticipate 4x.


>> We usually anticipate 4x.

All software estimation boils down to the long established scientific methodology:

(2 X what the last person said) optionally plus 2 weeks


Years ago I worked with a developer who had a different methodology: 2x, then bump the unit of measure. Thus, 1 day -> 2 weeks; 2 weeks -> 4 months. It's been remarkably accurate over the past couple of decades.
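
That rule as a toy function:

    UNITS = ["hours", "days", "weeks", "months", "years"]

    def bump_estimate(n, unit):
        # Double the number and move to the next-larger unit
        bumped = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
        return 2 * n, bumped

    print(bump_estimate(1, "days"))   # (2, 'weeks')
    print(bump_estimate(2, "weeks"))  # (4, 'months')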


Mine is to multiply by five. It’s pretty accurate.


8 weeks => 16 months, 2 months => 4 years

???

Doesn't seem consistent


> 2 months => 4 years

It's consistently correct.


Except if you are in sales. If so it's 1/2 of what the last person said, or optionally 2 weeks.


Isn't it common for sales people to intentionally give a longer time frame than management specifies? I believe they try to add a 20%+ buffer plus rounding, unless there is an exact public release date. Though I'm not in sales, so I could be completely wrong.

In the consulting world there is the rule of three, where you multiply everything by three, not just how long it will take.


IME, sales people usually change the estimates to much lower than those reported to them, leading to no end of frustration and overtime on the engineering side.


I once had a manager who insisted on time estimates, but used them in a way that I actually found valuable. You'd write out all the tasks involved in a project, add them all up, and then say "this will take T".

Then he'd say, "okay, what would it take to do what you just said in T/2?" Then you'd cut stuff until you got to T/2. The project then, very often, would take me the original T.

This seemed to work a lot better than just doubling the initial estimate to 2T.


I'm stealing this with the caveat that it could put pressure on a developer to work out requirements, which isn't always what you want. If used correctly, to remove "gold plating" from an MVP, this seems like a great technique.

Another estimation trick that I've found to be very effective over the years is to start with the same sort of task breakdown and finger-in-the-air estimates. Next, start to make a mental list of all the things that could possibly go wrong with each of those tasks. Add those to the list. Most developers with some experience don't add these initially because they don't consider them to be "tasks", but can come up with a huge list surprisingly easily when prompted. Now factor in all the extra time for dealing with those unexpected-but-will-definitely-happen issues.

The original estimate is your best-case, since almost all developers tend to estimate optimistically. The new one is the worst-case.

A good rule of thumb is that the worst-case is 16x the best case, so if you got less than that, it's a good indication you didn't think of enough things that can go wrong. If it does follow the typical curve, 4x the best-case estimate will be a good expected-case (75% chance of success).

What this manager did right was prioritising the tasks, because estimates are just the starting point. The next thing is to know what absolutely must be done for MVP and what is nice-to-have.

If you have 3 weeks to launch an MVP then that MVP needs to be achievable in about 4 days. Otherwise you are setting unrealistic expectations. If you actually finish it in 4 days (unlikely) then well done. You can start adding polish in order of priority, starting with a load test in production to find out as many "unknown unknowns" as possible.
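
The arithmetic of that rule of thumb, as a sketch:

    def spread(best_case_days):
        # Rule of thumb from above: honest worst-case ~16x the best case,
        # expected-case (~75% chance of hitting it) ~4x the best case.
        return best_case_days, 4 * best_case_days, 16 * best_case_days

    # A 3-week (~16 working day) MVP deadline implies a ~4-day best case
    best, expected, worst = spread(4)
    print(best, expected, worst)  # 4 16 64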


In UI design you do this exercise where you put everything into 4 categories: must have, should have, could have and won't have. Take your idea and jam it into these four while being 100% honest. It does a similar thing to what you mentioned. But I truly like the idea.


MoSCoW, I learned this studying CS


If you look at that conversation, you'll see that the value comes not from the number T but from the list of tasks which can then be cut.


A lot of devs have trouble decomposing work into small enough chunks. This guy had a way to force that, at least a little.


> Sometimes it takes a conversation with someone else unfamiliar with your work to really question your fundamental principles

This is one of my favorite tools for development. I always sketch out an idea (no coding) and then try to describe it to someone with zero CS background before diving into an MVP. The times when I didn't follow this process - well - just as OP described while trying to document the CLI, I basically wrote the damn thing 2-3x over. So instead of a couple of hours of chatting with other people, I had to spend days/weeks running in circles around my own work. Please, if you have juniors or if you are a junior, try to drive this lesson in.


Whilst working on Box CI, I've had a few conversations that have totally changed my trajectory and way of thinking about the product. It's really the best thing you can do to advance things. Sitting writing code is only ever linear - you make progress one hour at a time, very slowly. A 15 minute conversation however, especially with someone you haven't talked to about it yet, can save you days of unnecessary work on something useless or, better yet, get you to an idea that might have taken weeks to come to on your own.


Fun fact: what you're describing is called dechunking. https://en.wikipedia.org/wiki/Chunking_(psychology) And ofc, rubber duck debugging too. ^_^


OP here - this is a sidebar, but the HN hug was making the page load really slowly (~10s for me) even though the blog part of the site is cached. Anyway, because I'm using kubernetes, all it took to fix it was bumping the nodes in my cluster, 4x-ing the replicas, kubectl apply, and it's snappy again! All done in about a minute. What an awesome tool kubernetes is.


Please take this comment in good faith because I'm genuinely curious: why do you need a kube-managed cluster of N nodes (and now 4x N) to host a static blog? I wonder what work goes on under the hood that this needs such scale.


The blog is just attached to the product site https://boxci.dev, that's the reason for using kube. It should be separately hosted, but this was just simpler.

The static blog pages are all cached via nginx though, so I was surprised I needed to increase the nodes & replicas. Short answer is I don't know and will need to investigate, but I suspect it's because the kubernetes node instances themselves are fairly low powered, there's a low CPU limit on nginx, and perhaps nginx is doing a lot of work serving the js bundle to so many simultaneous users - that really should be hosted on a CDN (probably the blog too).


> Should be separately hosted but this was just simpler.

Probably shouldn't. If you are already managing an automated environment, creating another environment that is easier to manage has nearly no upside.

Unless that phrase was meant to use the CDN you talk about later, but again, do you need a CDN to solve the easiest part of your infrastructure? (Or rather, do you have enough traffic so that it isn't the easiest part?)


Yeah my phrasing wasn't that clear there but I essentially meant hosted as in cached on CDN servers, just to reduce load on the nginx instances. Though my best guess at the moment for what was causing things to be slow was just sending the JS bundle to so many simultaneous users. Putting that on a CDN may be enough.


The real short answer is: he doesn't, he could just use Github Pages or similar :-)

I'm guessing he likes to tinker.


To be honest that's not it - as in my sibling comment, attaching the blog onto the site was just the fastest way to go, given that I wanted to keep it on the same domain. The product just happens to be built on kubernetes already; use of kube is definitely not for the blog. I'm actually pretty surprised that nginx couldn't handle the traffic, even with the fairly low resources I'd allocated it. It's just cached static content served right from nginx you're seeing there. Now I've seen that this setup isn't really appropriate (at least without being very expensive!), I'll spend the time to put it onto a CDN.


If you're not worried about the privacy & co. stuff, Cloudflare is free and it's probably the best option for you.


Try to add autoscaling. You shouldn't be doing it manually :-)
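
With a plain Deployment this can be a single `kubectl autoscale` command, or done programmatically - a rough sketch with the official kubernetes Python client, where the Deployment name "web" and the namespace are hypothetical:

    # pip install kubernetes -- a sketch, assuming a Deployment named "web"
    from kubernetes import client, config

    config.load_kube_config()  # uses your local kubeconfig

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70))  # scale up past 70% CPU

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)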


Yeah, you're totally right and I actually will now! Infra wasn't ready at all for the HN hug. I think I'd had at most 20 simultaneous connections prior to this!


The first time my site ended up on HN, I set up Cloudflare because this community blew my site off the internet pretty quickly.


I would consider myself a fairly experienced developer (~10 years), and yet I have had the same experience on countless occasions. People always think of you as "being the expert" who knows everything, so I try to explain my situation like this:

"Think of me more like being a journalist. You want a good story and have a rough idea, so I will find the right people and interview them (domain experts, business people), try to figure out how to distill their knowledge into something that is both accurate, yet not too detailed and most importantly has to fulfill the needs of my readers/users (easy to read, yet super insightful, with some pretty images etc.). Wheter the article is about nuclear physics or siamese koalas is secondary, the process is more or less the same, yet I am neither a koala expert nor a nuclear scientist. You are the expert."

I also try to explain why I am unsure about estimating even relatively small tasks like this:

"You know restaurant (or another famous location) Y, right? We both know how to walk, I mean we have had like 30 years of experience in doing that, correct? So how many minutes does it take you to walk from here to restaurant Y?". The more people in the group, the more interesting it might get. Estimates usually differ by a factor of 2-4. People usually cannot even correctly estimate a trivial thing like taking a walk.


The walking example is really good. I also regularly underestimate how long it will take me to arrive when meeting a friend, whether walking, by public transport, whatever.

The reason it's an underestimate even when I know the distances and usual times is that I think about the ideal set of conditions (not missing train connections, clear streets for fast walking, being able to find the place we're meeting instantly on arrival rather than looking around for the entrance for 5 minutes) and go with that. I never account for the possibility of missing a train connection by 15 seconds and then having to delay the journey by 30 minutes. Even though I know, of course, there is a non-trivial chance of that happening.


A colleague from a previous job once told me: "when a developer gives you an estimate, always double and add one of that unit" (estimate = 2n+1). He told me this after a while, because he would often nag me to clarify if I meant 7h or 1 day.

i.e. if you estimate 7h, then calculate 2x7+1 = 15h.

If you estimate 1d, then calculate 2x1+1 = 3d = 21h.

It's a silly joke, but it also gives insight about how we perceive time and the elasticity of estimates.

And of course, instead of multiplying by 2, you multiply by a time factor that depends on the risk factors involved (which are also guesstimates, but for me 'x2' would mean a rather low risk factor).
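
As a toy function (assuming the 7-hour working day implied by the example):

    HOURS_PER_DAY = 7  # from the example above

    def pad(n, unit="h", risk_factor=2):
        # estimate = risk_factor * n + 1, in the same unit
        padded = risk_factor * n + 1
        hours = padded * (HOURS_PER_DAY if unit == "d" else 1)
        return padded, unit, hours

    print(pad(7, "h"))  # (15, 'h', 15)
    print(pad(1, "d"))  # (3, 'd', 21)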


That sounds like a relative of a rule of thumb I read once upon a time: projects slip by the units they're estimated in. N weeks slips by weeks, days and days, months, years, etc.


It's astonishing how many developers resign themselves to the idea that "estimating software projects is impossible". It's impossible when there are:

- Technical uncertainties (e.g. self driving cars)

- Human uncertainties (multiple different teams building single large software system)

- Scope uncertainties (we don't know what we are actually building until we get into the weeds)

Outside of these we should be able to make reasonably accurate (+/- 30%) estimates. I wrote a blog post [1] about it if anyone is interested, but here are the main takeaways:

- Don't just estimate writing code but also include time required for testing, documentation, communication, setting up infra/deployment

- If a certain part is hazy ("is there a reliable python library for speech to text?") then research it enough to know the path ahead

- Break down the system into smaller units until you feel confident estimating each piece

[1] https://blog.amirathi.com/2018/02/05/science-of-software-est...


Scope uncertainty is always larger than the certain part. It's often nearly all of the scope.

Technical uncertainty is also more common than not: unless you are using a dying platform, the technology will change between estimation and implementation.

Human uncertainties are avoidable if you work alone. If you are in a team, they are a certainty.

Or, to put it shortly, yes it's perfectly possible if you are doing a university project alone in an old platform without an active community.


It's hard enough to estimate accurately when you are building your own project with no one breathing down your neck. But be a contractor, where your estimate becomes your salary, and you'll find that you tend to give longer estimates.

I once gave a 3-day estimate for a client project: 2 days to make sure I built it right and a third for testing. It turned into a 7-week project: https://idiallo.com/blog/18000-dollars-static-web-page


I really wish the estimation process would get turned around. Let stakeholders decide when they need it, and engineers are responsible for delivering the best version they can on that date. As the project unfolds, stay in close communication and make timely decisions about what tradeoffs are acceptable.

My estimates get accurate once I know where we are on the spectrum from quick and dirty to pushing the boundaries of what's possible. There's a 3 month version and a 3 year version of a lot of ideas.


That's the idea driving agile product development: defining and estimating big projects is hard, so let's do many small projects (sprints). If the stakeholders have a hard deadline, you can ship whatever you have then because your product is supposed to be in a shippable state, though possibly with reduced functionality, at the end of each sprint.


You might find the way Basecamp works interesting: https://m.signalvnoise.com/how-we-structure-our-work-and-tea...

Each project is six weeks. You deliver the best possible version within those six weeks.


At my place we don't estimate time at all - it's nearly impossible. But the value of the fix or feature is loads easier to estimate and measure. Bug X is worth $10000, dedicate $5000 to fix it. Check progress. Dev says almost done? One more cycle. Still seems far? Punt.

I'm always confounded when managers want a time estimate but cannot provide a value estimate.

The goal is the same: fix important/urgent bugs, develop the product and keep a good, steady velocity.
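
The decision rule might look something like this sketch (names and thresholds invented):

    def review_cycle(value, budget, spent, dev_says_almost_done):
        """Value-based go/no-go: keep working only while expected spend
        stays under the value of the fix."""
        if spent >= value:
            return "punt"  # fixing it now costs more than it's worth
        if dev_says_almost_done:
            return "one more cycle"
        if spent >= budget:
            return "punt"  # budget exhausted and still far from done
        return "continue"

    print(review_cycle(value=10_000, budget=5_000, spent=4_000,
                       dev_says_almost_done=True))  # one more cycle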


Thanks for sharing, I just quit my sideproject after a year of coding and find posts like this helpful to learn from that experience.

> Then I stopped, took a step back and looked at the product from the perspective of someone who would want to pay for it.

Though I think this is dangerous advice. You do not know what this perspective is like (unless maybe your product is a "scratching your own itch"-type product) and therefore making design changes makes no sense. You will never know if users maybe even preferred your first "design", or whether there is a different blocker - landing page, installation instructions or anything really - preventing them from starting to use it.


Thanks for saying that as it's basically the reason I wrote the post. Really good to share these lessons with fellow bootstrappers as so many of them are general and applicable between projects, even whole businesses.

Totally agree with what you're saying here and let's see, I might write another post in a few months essentially saying this was a mistake and a waste of time, but yeah if you're interested in seeing a comparison, check out the twitter account for the blog @zero_startup and you can see the old screenshots and the newer ones.


As my colleague says: The time where you know the least about a project is when you start. This is also the time when your estimates are likely to be the worst.

When can you say with fairly good certainty how much longer it's likely to take? In my experience it's about 1/3 the way through the project. I'll give you my reasoning behind this number.

There is a model of software defect discovery called Littlewood's model. It basically says that the rate of software defect discovery is a random variable. As you discover more defects in the code, the number of defects that are left to discover diminishes. Assuming you put in a constant effort to discover defects, your rate of discovery decreases. You can estimate the number of defects left in the software by looking at the decrease in discovery rates over time. There are lots of scholarly articles on Littlewood's model, so I won't go into more detail than that. There are newer models too, but I always found Littlewood's "good enough" for my purposes.

It occurred to me that requirement discovery might follow a similar curve. As time goes on and you understand more and more about what you need, the number of things to discover decreases. If you work on your project in a constant manner, the rate at which you discover new requirements will decrease over time. I took data for a number of project and the discovery rate curves were very similar to defect discovery rate curves.

The key is to look at the rate of change of discovery. Once it is tailing off, you can estimate the speed of the drop and get a good idea of how much work you have left to do. Each curve is different depending on a number of factors, but by the time you are about 1/3 the way through the curve, you have enough information to estimate the parameters.

Note: Littlewood's model doesn't actually hit zero, but you can set a threshold that is "close enough to zero". Where you place it will change the 1/3 figure, but I hope what I'm saying is understandable.
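
A rough sketch of that estimation - fitting an exponential decay to weekly discovery counts and integrating the tail. The data is invented, and Littlewood's actual model is more sophisticated than a plain exponential:

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented data: new requirements discovered per week of constant effort
    weeks = np.arange(1, 9)
    discovered = np.array([14, 11, 9, 8, 6, 5, 4, 3])

    def decay(t, a, b):
        return a * np.exp(-b * t)

    (a, b), _ = curve_fit(decay, weeks, discovered, p0=(15, 0.2))

    # Discoveries still to come if the trend holds: the tail integral of
    # a*exp(-b*t) from the current week T onward, i.e. (a/b)*exp(-b*T)
    remaining = (a / b) * np.exp(-b * weeks[-1])
    print(f"~{remaining:.0f} requirements left to surface")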


In my career I have estimated a lot of projects. I have tried different techniques, and I find https://en.wikipedia.org/wiki/Program_evaluation_and_review_... to be the most accurate one.

If I was asked to give some hints, I would say:

- in the optimistic and pessimistic cases, don't be afraid to use extreme values

- don't estimate on your own; involve a reasonable number of developers with a variety of experience

- don't give the client an exact value, give probabilities ("it is very likely that we will deliver this in between X and Y man-hours/days")

All estimates I made using PERT were quite accurate (<5% difference between estimated and real execution time)
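
For reference, the core of a PERT three-point estimate - this is the standard beta-distribution weighting, not anything specific to my process:

    def pert(optimistic, most_likely, pessimistic):
        """Classic PERT: weighted mean and standard deviation."""
        expected = (optimistic + 4 * most_likely + pessimistic) / 6
        stdev = (pessimistic - optimistic) / 6
        return expected, stdev

    e, s = pert(5, 10, 25)  # days; extreme bounds are fine, even encouraged
    # Quote a range rather than a point: roughly 68% within one sigma
    print(f"likely {e - s:.1f} to {e + s:.1f} days, expected {e:.1f}")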


Yes. I was about to link this too.

From my experience, I've found that 3x my initial "estimate / goal / ego-trip timeline" gives the most realistic picture. Sometimes I can hit my best; more often things come up. It took a little while to accept, mostly because I always wanted to do my best and be able to promise and fulfill it. But reality hit back, and I had to adjust.

It's hard to see the network effects of each micro-interaction in long range projecting, but once you really break down how long each thing could take, how that would affect each other related part, and who/what else is involved at each step/layer, it clears up the fog.

https://en.wikipedia.org/wiki/Fog_of_war#Military


Being asked "how long will it take?" is actually my greatest fear in tech work. Give me long enough and I'm (fairly) confident I can overcome any other problem, but estimating time, even for seemingly trivial things…

<gulp>

I'll probably give everyone an upvote here as a) it's nice to see I'm not alone, and b) there's some really good tips.

Not sure why every tech interview introduces a whiteboard with an algorithm problem when all they have to do is ask me "how long will this simple command line client take to build?" to really see how I attack a difficult problem!


I'm 18 months into a project I thought would take 6. Why? My co-founder bailed, and I took some shortcuts early on to try and secure some clients that fell through; those shortcuts led to a substantial rewrite.


I agreed with everything right up until the "extra time for design" section. I spent last year building a startup and have thought a lot about MVPs. Coming from a background of building big expensive systems, it took a lot to shake the mindset of "do it right from the ground up". My cofounder did a great job at pushing me towards the absolute minimal solution to learn what we wanted to learn - I don't need to spin up a backend and write CSS, I can use Wix to mock it up and see how that goes, THEN build it properly.

The idea is to learn as much as possible (read The Lean Startup if you haven't). If you build too much in one go, you're going to have tied yourself to one vision of how your users will use the product and be more closed to learning from them what they actually need. Note that you were persuaded by a designer to make the design nicer, not a user, potential user or even product manager.

What was the problem with launching, and then deciding that design was the next priority and working on it while your product was running?

[disclaimer: I definitely haven't got this figured out as we shutdown the company and I'm back building Big Expensive Systems]


> What was the problem with launching, and then deciding that design was the next priority and working on it while your product was running?

Great question, and one I asked myself when I was about 2 weeks into the redesign :-)

On reflection, basically no, you're right, there was no reason I couldn't have shipped and then improved the design in the meantime. The thing about it not being good enough for people to pay for was just my opinion - not tested with real customers.


My personal experience is that traditional software estimation techniques don't work in an MVP situation. Yes, it's frustrating as hell not knowing how long you have before your money, patience and interest run out. In a non-MVP situation some estimation models work, some 40% of the time. But managers make it look like they work all the time. Cutting scope to hit the deadline doesn't count.


Also, the Viable part of MVP is subjective. As you build the MVP, your idea of what is viable changes. This needs to be accounted for. The minimum product has to be viable for a paying customer. If the MVP is not a paid product then you can get away with less polish; not so with paid products.


Exactly what happened with Box CI :-)

The 'viable' I had in mind was 'it works' - I later realised that viable actually meant 'I think there is a chance I can ask people to pay for this' - and it turned out that second part roughly doubled the already overrun build time!


Yep. Me and my boss estimated rewriting what looked like a really simple Silverlight application in HTML5 at about 3 months. He's far better at estimating than I am, and a year later we finally did it. Part of the problem was we chose to use .NET Core because it was more modern, and who knows if Microsoft will dump .NET Framework at some point (honestly, after .NET Core 3.0 I could finally see this being possible). So we upgraded to .NET Core.

Estimating requires you to already understand the very thing you're estimating. This is why plumbers and electricians give you reasonable estimates. Heck, when we hired movers after buying our first home they estimated two hours on the spot, and were done just a few minutes before the 2-hour mark.

The issue with programming is that your tools change, your approach changes with new iterations (usually for the better!) and you notice new problems. Sometimes you do one thing in 5 minutes, and your second run-through using a similar approach is broken because it requires an edge case you didn't anticipate code-wise.


I've gotten pretty close with two 4-month projects this year (2 weeks off in both cases). It took two weeks of planning, research, and understanding roadblocks to get to that point.

It's doable, really, and it feels an awful lot like waterfall when you're in the moment. The most important steps were:

1. Write it down

2. Don't keep it to yourself

Everyone else is smarter than me and provided immeasurably good feedback.


2 is really helpful, 100% agree.

I shared my deadlines and progress on this build via the blog and on twitter under @zero_startup and I would say doing that probably pushed me to ship the MVP at least 50% faster.


Man, you're telling me. I planned on spending 4 months on https://appdoctor.io and ended up launching it after 1 year and 2 months.


Have heard so many stories like this. It's unbelievable how much longer these things can take compared to what you think at the idea stage. How's it going now?


"What's the idea for Box CI? A CI service that does everything for you, except for running the builds."

Marketing advice: define "CI" near the top of the first page. The word "Integration" is not on the page either. I am familiar with Continuous Integration but not the abbreviation CI.

Adding to the confusion, the font on (most of) the home page is sans serif, so CI cee-eye looks like Cl cee-ell.

Moreover, you mention CLI on the page... though I did know what that is!

Good luck with it.


Thanks for this. Yeah, you're right - it's that thing where, when you're so focused on something for ages, lots of things about it seem obvious to you that aren't necessarily obvious at all to other people coming in cold. This is useful feedback.


CI is a pretty widely understood industry abbreviation at this point.

> Adding to the confusion, the font on (most of) the home page is sans serif

I mean, except the logo at the top of every page.


Is there any well-regarded book, study, methodology or even individual who claims that the development time of non-trivial software can be accurately estimated?


Interesting writeup! Thanks for sharing.

>Seems easy right - it's a CLI - just document every option and you're done in half a day. That's what I thought too.

>But then you realise that each individual option is its own thing that requires thought to explain, and crucially, interacts with all other options. It's not enough to explain it in isolation. You need to tie everything together. You need examples. You realise from doing all this that there's a simpler way to name or combine options, you go back and make that change and have to update all the examples, explanations, etc.

You question whether your MVP at that stage was viable, but when I read this, I wondered whether it was actually minimum. Is it possible that you added too many options and your MVP would have been easier to ship if it had been more limited in scope?

>The product did not look like something anyone would want to pay for

One pitfall to look out for is that you probably can't accurately predict what customers pay for until you talk to a real customer. Maybe things like aesthetics that seem important wouldn't actually be important to a customer if your product solves their problem. A designer gave you feedback, and they no doubt have valid expertise, but they're also not your customer.

I had the experience of shipping too late last year:

https://mtlynch.io/shipping-too-late/

tl;dr of my takeaways:

* Even if you've been warned about startup pitfalls a million times, sometimes you just have to make the same mistake to learn it.

* The goal of the MVP should be to get something in the customer's hands ASAP so you can hear their feedback. If you focus on polish and design, you might be polishing something that nobody wants.

* Question whether your "must haves" are actually just ways of delaying scary conversations with customers.


Thanks! And thanks for the link, I'll give that a read.

On the 'get it in front of a customer' thing - I agree and struggled with this decision a bit after I'd made it as a result. Probably the best way to validate you're building the wrong thing, or it's not ready yet, is hearing 'no' and hopefully why.

Super interesting factor in this case though is that basically I am the customer - it's really a product I'm building because I want it to exist. I asked myself the question, would I honestly pay for this right now? The answer was no. So at least for customer number one, me, I did kind of have that answer. Maybe cheating a bit though :-)


Yep, the good old "build it and they'll come" trap for the naive engineer that I've stepped into myself. Thankfully, having read Rob Walling's book now I know that marketing comes before code: https://news.ycombinator.com/user?id=rwalling

It's a bit dated now, but most of the advice is still relevant to solo engineers looking to turn into solo founders.


I like that book too! I wish I had read it before shipping that first MVP.

My notes: https://mtlynch.io/book-reports/start-small-stay-small/


Follow-up to my other comment: I read your post and it's extremely entertaining - I found myself laughing at (you can probably guess) the estimates part, and a lot of the other stuff in there I know so well I felt almost as though you were writing my own thoughts better than I could :-)

Very cool to hear from other bootstrappers / solo founders like yourself talking about things they've done, successes yes but also failures and lessons, and that was also my aim with this post. I feel like it's a pretty small niche within the much larger overall 'startup' world.


Oh, thanks! I'm glad you liked it.


Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.


At my place we point stories using the Fibonacci series. It works to help identify that a thing over an 8 usually needs to be broken down further. 5 and 8 are pairing stories; 1 and 2 are for when you have a half day available (depending on the seniority of the dev), or for a Friday afternoon if you finish another ticket.


I think most programmers tend to do their estimates in this wonderful parallel dimension where the code base they will be working on is pristine, with well-designed, documented and thoroughly tested code that they get to work on without interruptions or meetings of any kind.


The thing is, in this case I'm building this thing from scratch on my own - so literally without interruptions or meetings of any kind, and all the code, architecture, devops, everything is exactly how I want it - and the estimate is still wildly wrong :-)


That's why, when people ask me to estimate work, I tell them that if they need a hard deadline things will (counterintuitively) take much longer, and try to convince them that hard deadlines are bullshit. If you want a hard deadline, I have no choice but to allocate a huge amount of buffer time, at least twice the "aggressive" estimate so that if shit doesn't go as planned (which is what usually happens) I don't miss the deadline. And I have to plan for the worst possible kind of excrement hitting the fan, which is not realistic. And then, in spite of that, people are still under more stress, because procrastination being in human nature, hard shit gets done last, and there's more in it that can go wrong.

Consider instead shipping stuff incrementally, feature by feature, when features are done, without any hard deadlines. Both the speed of delivery and quality will be improved compared to hard deadline driven development.

Plans are worthless. Planning is indispensable.


I think it depends. I work on a large enterprise product and we are generally within 20% of the estimate. If some prominent team members were not so aggressive in planning, that figure would be smaller. Average age on the team is over 40, average experience around 20 years.


Experience makes a huge difference in planning, simply due to knowing where all the bodies are buried and where new ones will emerge.


So, basically, all the common reasons why every deadline is missed. Nothing new here.


I started multiplying estimates by 3 and found it to be, in general, much closer to reality once all was said and done.


What is an MVP?



This was my question. I know Hacker News caters to techies, but there are a LOT of acronyms out there. I see MVP and I think "Most Valuable Player", which clearly doesn't fit in this context.


You're pretty fast.


Viability is a KPI and not a set of requirements for some product.

You can't plan for a product to be viable. You first build a product and then you verify whether it is viable. At best you can formulate a hypothesis about what set of features you think is good enough in terms of KPIs and then plan to build only that. However, you have to factor in the likelihood that your hypothesis is wrong and also that the process of building something should result in refining that hypothesis over time. If that doesn't happen, you are not learning and you are probably not really building a viable thing.

The fastest way to get to viability is to take baby steps: short sprints/iterations, ship often, re-assess where you are every step of the way. Do the most valuable/risky/uncertain things as early as you can so you can adjust course if your assumptions about their value turn out wrong. Most startups get this wrong and fail for this reason because by the time they figure out they are on the wrong track they've already wasted most of their seed funding on building pointless things.

The lean movement tends to focus on the M part too much which has a built in risk for products to be unexciting and ultimately non viable. It's great if you are copying somebody else's business model or building some kind of market place. It's not so great if you are trying to do something new.

Lean has a tendency to postpone value creation until you've built a lot of low value commonalities like a login system, user management or crap like that that every startup seems to spend ages on without getting it really right. If you hear the words MVP and Android, iOS, and web in one sentence, that translates as "we're building a lot of common functionality three times and all our experiments have 3x the cost". The chances of that being utterly unremarkable and non-viable are huge.

The minimal thing would be to postpone that stuff until you have something worth logging into and worth having multiple implementations of on multiple platforms. Building a good mobile experience is a huge investment. Don't even think about it until you have something viable.

You are not proving the viability of a login system with your MVP, nor are you proving the viability of a slick iOS experience. Your thing is viable if, despite obvious UX and feature issues, you still get positive KPIs. Once you get there, you can justify the expense of making it better. So spend as little time as you can on stuff like that instead of making it the top priority in your first iterations.

As long as you are doing minimal things you are not actually creating a lot of value. You are actually postponing value creation and viability. It's the hardest things that create the most value and that are the hardest to plan and the easiest to postpone when planning. Therefore I believe, Scrum is the wrong process for building something that has a high risk/reward balance and the right process for building something that has low risk/rewards. All the value is in the stories that everybody struggles to estimate. Scrum results in most resources getting sucked up by the least valuable stuff.


Nice way of marketing. You could have posted a Show HN a thousand times and nobody would care, but this is a creative way to get a successful Show HN.


This is one of the core reasons why we've built https://saasify.sh, an extremely simple solution to launch FaaS-based businesses and start validating product / market fit in hours instead of months.

I agree with the author in terms of estimates and their assessment of the process, but the point I'd like to make is that the vast majority of those 12 weeks it took them to launch the MVP weren't spent on core, differentiating functionality, but rather on boilerplate that's common to every SaaS product.

Important but non-core features like billing, accounts, documentation, and a marketing site end up causing many projects like this to fail before they even get launched.

If you can launch MVPs like this significantly quicker albeit in a more constrained yet platform-agnostic way (e.g., using serverless functions as an abstraction), it becomes really powerful for makers like the author of this article to focus on core value and ship quickly instead of wasting time on all of this other stuff.



