
The trick is to break it down until it’s all trivial. That process takes time itself, but you can estimate that much more easily. I’ve recently found I end up giving an estimate of, say, 2 days to get a solid estimate, and then come away from that with 2-5 weeks of tasks that are no more than a day each. Estimates of a day are pretty accurate (for me).

Another approach I had some success with was estimating the 80% likelihood case, i.e. I’m 80% sure I can finish this in the time I’m estimating. Less accurate, but much more predictably on schedule, which is more useful for many stakeholders.
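If you keep a log of past estimate-vs-actual ratios, that 80% number can be derived mechanically rather than by gut feel. A minimal Python sketch, with made-up sample data:

    # Hypothetical history of actual/estimated ratios for past tasks.
    ratios = sorted([0.8, 0.9, 1.0, 1.0, 1.1, 1.2, 1.3, 1.5, 2.0, 3.5])
    p80 = ratios[int(0.8 * len(ratios))]  # crude 80th percentile
    print("multiply gut estimates by %.1f for ~80%% on-time delivery" % p80)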

A last one is giving rough estimates while being clear about how rough they are and how much time it would take to get more confidence. I was handed a 50-page PDF of API docs for a service and asked for an estimate for the integration. After 15 minutes I said “1-4 weeks; if you want more accuracy, I’ll need a few days”. The answer was “no problem, we wouldn’t consider it unless it was < 1 week”.

These examples are all on the weeks-to-low-months scale, but the same applies further up. A good understanding of the codebase and domain is very useful.

Estimating isn’t easy, but, like any skill, it can be improved. It requires some flexibility from product managers, but open communication goes a really long way and results in better decisions overall and less wasted effort.



This is better than not breaking it down, but it's still not very good. McConnell advocates this approach strongly in his book on software estimation[1], and I've seen it work to some extent. It works for reasonably repeatable projects that are very similar to earlier projects you have experience with. But when we get into seriously non-trivial projects that are more "R" than "D"... well, it just isn't enough.

The problem is that on a non-trivial project, these breakdowns are guesses which carry considerable uncertainty. And the less you know, the more the unknown-unknowns get you. If you write a detailed breakdown of a major piece of software before writing a line of code, at the end of the project you will look back and laugh and laugh at your own innocence.

As just one random top-of-mind example, consider Carmack writing Doom's rendering engine[2]. He tried several approaches that didn't work before striking the right balance with pre-computed BSPs. Some of the things he tried were fundamentally good ideas and would later be used in other games - but they didn't work on the hardware of the time. How do you estimate something like that? "I'm going to spend a month on my first approach. That could be one month or two. If that doesn't work (50% chance), I'll read research papers and textbooks (1 week - 3 months) until I find a promising technique. Then I try the most promising of these; since this is an approach I haven't even read about yet, it could be one week to implement, or six months. And there's some chance that will fail too and I'll have to do it again." The final estimate is anywhere from one month to a year. You just can't know.
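For fun, here's what that hand-wave looks like as a quick Monte Carlo sketch in Python. The branch probabilities and ranges are the ones from the paragraph above; the retry chance after later failures isn't specified, so I'm assuming 50% there too, with all draws uniform for simplicity:

    import random

    def project_months():
        total = random.uniform(1, 2)          # first approach: 1-2 months
        while random.random() < 0.5:          # didn't work; start over
            total += random.uniform(0.25, 3)  # read papers: 1 week - 3 months
            total += random.uniform(0.25, 6)  # new technique: 1 week - 6 months
        return total

    runs = sorted(project_months() for _ in range(100_000))
    print("p10 %.1f, p50 %.1f, p90 %.1f months" %
          (runs[10_000], runs[50_000], runs[90_000]))

The spread between p10 and p90 covers most of a year, which is exactly the point.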

[1]: https://www.amazon.com/Software-Estimation-Demystifying-Deve...

[2]: https://twobithistory.org/2019/11/06/doom-bsp.html


I have that now in a project. I have two things that need to be made where I just don't know if I'm going to get them to work well within the larger design I have in mind. I wanted to prototype them in the summer, but the backend wasn't ready, so I only had very small mock data served in an unrealistic way.

The only thing I can do is move them to the front of the project as much as possible, so we hit the risky bits as early as possible. Once we get them to work, the rest should be smooth sailing, but I'd rather fail before the remaining time has been spent. The PM agrees.


That’s why you do a time-boxed spike, i.e. a fixed amount of time to do the investigation so you can work out your preferred approach. Once you have sufficient knowledge, you can estimate.


Some things can't really be broken down into trivial parts unless you already know the answer.

E.g. I once worked on a warehouse app. None of us ever had before.

If you have no experience with any of this, how do you break down picking SKU design (ours weren't just random numbers), barcode printers, and barcode scanners for your software project? That part alone took us about 7 weeks, including a complete SKU redesign, because our initial test hardware worked better than some of the stuff we got later.

Never mind all the hardware integrations, which were also new to us: multiple printers (barcode label, shipping label, pick list, packing slip), a barcode scanner, and a scale, all seamlessly integrated into the warehouse application.

This also reminds me: one of the services we decided to use had a serious (for us) bug that we didn't find out about until late in the game. They didn't fix it for 3 months, and it blocked us for a while. So yeah, you could very well end up integrating more than once. You can't really break that down into trivial parts.


I do something similar to the OP when working on established systems.

The trick is to treat everything that you can't break down to a triviality as an area of research until you've solved that problem.

So, say, you plan three days in a sprint to research a certain requirement, the results being a set of small, easy-to-estimate features and maybe some more tricky ones.

If you're working in an area where your team has absolutely zero experience, then there's no way you can estimate anything accurately. In these cases I hope that you're working with a small team (2-3 developers and a BA/PO) who are highly experienced and work well together. Then you should run flexibly with features: work Kanban style and implement the 80/20 effort/win features - and don't be scared to drop a feature if it's looking like it's not in the low-effort 20%.


The biggest problem I've found with this approach is that it drives the prioritization process towards smaller and smaller pieces of work, because those can be accurately broken down and estimated. This means larger, but proportionally much more valuable, pieces of work do not get picked up.


Hmm, I disagree. The larger, more valuable pieces are broken down into quantifiable parts, but they still get done. Less unpredictability, and that’s the point.


I can only speak to my experience. The friction involved in slicing those pieces of work ends up creating a sizable force. I'd love to hear more about how you've handled it well.


Well, that force is the design effort: “How do I break this up into parts?”, “I need to test this”, “I need to isolate that or it’s not manageable”. Otherwise you’re just cowboy-coding, shaving yaks along the way to implementing the Epic Goal, leaving a trail of legacy code.


Again, this is just my own experience, but what I've found is that once you break the large, valuable piece of work down into small pieces, those small pieces don't get prioritized: each piece on its own isn't tremendously valuable, so they get outcompeted by other small pieces of work that deliver standalone value earlier on. I think the best answer to this is to advocate individually for large work of this type, but I would rather be able to work it into a repeatable process.


Breaking it down until it's all trivial is the most non-trivial part of the exercise.

If everyone could do that trivially, then there would never be any problems.


Breaking tasks down is like writing another program. Plenty of tasks branch off based on their result; you end up building a task DAG and not a task list. You could try to estimate that with Monte Carlo, but in reality, you won't get the correct shape of the DAG up front anyway.
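As a sketch of that, here's a made-up four-task DAG in Python where a spike's outcome decides which branch even exists, and an integration task joins two parallel paths. All task names and durations are invented:

    import random

    def simulate_days():
        spike = random.uniform(1, 3)
        # The spike's outcome decides which branch exists at all -
        # this is why the DAG's shape is unknowable up front.
        if random.random() < 0.7:
            backend = random.uniform(3, 5)   # reuse an existing library
        else:
            backend = random.uniform(8, 20)  # roll our own
        ui = random.uniform(2, 6)            # proceeds in parallel
        integration = random.uniform(1, 3)   # join node: waits on both paths
        return spike + max(backend, ui) + integration

    runs = sorted(simulate_days() for _ in range(100_000))
    print("p50 %.1f days, p90 %.1f days" % (runs[50_000], runs[90_000]))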


And the most time-consuming.


You cannot, by definition, break down a non-trivial project into trivial parts without having at least one non-trivial part in there somewhere.


The non-trivial part might be combining the trivial parts. :)


This is a good point and something probably not caught in the estimation of the smaller trivial parts.


I think most software falls into this category these days. The majority of software-based companies are automating processes that humans can or could do manually; they're just integrators, linking things and tasks together in useful ways.

I don't have a statistic, but I'd be willing to bet the majority of code in existence is not algorithmic in nature.


I agree. I've been writing code for 20+ years, and it's all been BPA code. Only once, very early on, did I do any complex algorithm work.


BPA?


Business Process Automation


Right, and it will take potentially unbounded time.


Think about how variance propagates through that chain of work, though, especially for tasks that are contingent on other tasks...


This is underappreciated. My experience on a carefully estimated software project was that the median task would come in 20% under budget. However, we still ended up over budget.

Every iteration there'd be one or two small tasks that blew up to >1000% of their original estimate.
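That pattern falls straight out of heavy-tailed task durations: most tasks land under their estimate, but the tail dominates the sum. A small Python illustration with arbitrary lognormal parameters:

    import random

    random.seed(1)
    ESTIMATE = 10.0  # hours budgeted per task
    # Lognormal actuals: median ~8.2 h (under budget), long right tail.
    actuals = [random.lognormvariate(2.1, 0.9) for _ in range(100)]

    print("tasks under estimate: %d/100" % sum(a < ESTIMATE for a in actuals))
    print("budget %.0f h, actual %.0f h" % (ESTIMATE * 100, sum(actuals)))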


I've estimated dozens of projects over the years, and in my experience, the more you split a project up, the bigger the total estimate gets. You can even use this as a tool to get more budget/time allocated to your project. Break it up as much as you can: no task is ever going to be estimated at less than 1 hour, but in reality you can do 10 of those tasks per day, whereas if you bunch those 10 tasks together, people will estimate 5 hours of work. Split them up and you have suddenly got 10 hours to do the same work!


I'd extend this by aiming to have each trivial step be a useful result that can be immediately released to market. If you can't make every step useful, try to make as many of them useful as possible, with a minimum number of steps between useful results. Put a hard estimate on the first few trivial steps, but recognise that it's largely guesswork beyond that. If the trivial product can be released and start earning money, that reduces the pressure for accurate longer-term estimates, as the earnings can fund the uncertainty. Recognise that this is an unachievable ideal, but it is at least worth striving for, since even an imperfect attempt will deliver some benefit.


The problem is that in many business domains, the absolute minimum viable product, or even a customer-worthy demo, is a 6-month, 10-15-person project. If the market already has complete solutions available, releasing an app which does 5% of what other apps do plus 1% that's different is simply not viable, and will likely ensure you never get a second look.


Well, yes, kinda, sorta. If you go into a market that already has a complete solution available, and that solution works well, you need to be at least as good as competitors.

But the way you go into a market is by figuring out what doesn't work. What is an unaddressed pain point. This is a hard thing to do by definition (that's why the payout for being successful at it is so big), but if you manage to do that, you don't need 10-15 people for an MVP... in fact I'd argue you can't truly do an MVP with a 15-person team. You need the large team to expand the MVP, but to identify it / have a customer-worthy demo, the 15 people will just get in your way.


The view in my company, at least, is that even if you have a strong differentiating feature, if you don't have more or less all of the baseline features that the established products do well, people will take a look at your MVP because they like your killer feature, try to use it, find out that it doesn't do all the things they need, and never look at it again, even if you fill in those features later on.


That can't be true, or no startup would ever get off the ground. Just look at any Adobe competitor and see if they started fully featured (most actually brag about their lack of features compared with Adobe products).

The more likely explanation is that your "strong differentiator feature" is not really as strong as you'd wish it was.


Sounds like "how to build a successful business".


If you can break them down to such a low granularity, doesn't that imply the software is trivial? To take an extreme example, how do you break down the software developed by Waymo?


> If you can break them down to such a low granularity, doesn't that imply the software is trivial?

No, it doesn't imply this. One granular piece of Waymo's software is "develop a component that understands the external environment as well as or better than a human". It's easy to say that, and it's even relatively easy to break it down into risk scenarios etc.

But developing it is... non-trivial. And there were parts of it where it was unclear whether they were possible at all.

But that is because large parts of Waymo were a research project, not software engineering.


Right, I'm not saying they can't be broken down somewhat, but the parent comment said he broke them down until the tasks were trivial. I don't think that example is trivial! Or that it can be estimated at about a day.

Regarding "research" vs "engineering", I don't think you can cleanly separate the two; many otherwise straightforward projects include a research component, even if it is just about using a new browser feature in a novel way, or some such.


For me, research is something where it is unknown whether something is possible. Engineering is where we know it is possible, even if it isn't clear how long it will take.


Under that definition, very little is research. Warp drives are research. Curing cancer is engineering.

Obviously, you have implicit constraints in mind: "Is something possible given $constraints?" And the set of those constraints is where the line between research and engineering starts to blur. If you set $constraints to "in time and under budget for the project", which seems like a very reasonable value, then it turns out that a lot of programming tasks become research work, unless you're only churning out cookie-cutter websites.


We don't know if (all) cancer can be cured at all.

And no, "in time and budget" doesn't suddenly make something into a research project.

Speeding up some technical process might be research (e.g., the old 1TB sorting benchmarks drove research).

But project constraints, and the fact that the team itself has never used a specific technology or whatever, don't turn it into research. Yes, I know that's a term people use, but it's a different thing from scientific research.



