
You're describing having a well defined goal, setting a critical path, and then regularly updating your progress. What we gray beards used to call "project management".


> You're describing having a well defined goal, setting a critical path, and then regularly updating your progress. What we gray beards used to call "project management".

That's not true at all. The whole point of organizing projects around scrums is precisely that there is no well defined goal nor can one exist. What exists are the client's needs, and those needs do and will change frequently and radically as projects move on.

Therefore, the classical project structure followed by grey beards does not make any sense in software development. What does make sense is to acknowledge that requirements specified by the client do and will change, and it's better to adapt to the client's needs and get everyone involved in the project. Therefore, projects are organized into small bite-size units of work that gradually contribute to implementing a product based on feedback from all stakeholders, and these units of work are implemented in small time windows. These units of work are short-lived, and the amount of effort invested in planning is limited to the goals set forth for that unit of work, which need to be manageable.


"The whole point of organizing projects around scrums is precisely that there is no well defined goal nor can one exist. What exists are the client's needs, and those needs do and will change frequently and radically as projects move on."

I don't feel this is true at all. The issue is that most clients aren't willing to put in the time or effort to actually see what their needs are.


> I don't feel this is true at all. The issue is that most clients aren't willing to put in the time or effort to actually see what their needs are.

You're assuming that it's realistic to expect that a requirements gathering process is able to precisely define all requirements, that these requirements will not and cannot change, and that software architects have perfect information and are able to make flawless choices regarding the design.

None of these assumptions hold even in conventional engineering projects. The only reason waterfall processes are used in conventional engineering projects is that it's so expensive to fix problems arising from these sources of failure that it's considered acceptable to deliver working but inadequate solutions.


"You're assuming that it's realistic to expect that a requirements gathering process is able to precisely define all requirements"

No, I'm not. I am assuming it's realistic to expect that a company actually put some effort into finding out what they need, and getting us that information so that we can actually plan out a project.

"None of these assumptions hold even in conventional engineering projects."

Part of the reason for that is that companies don't do any of that research. They don't look at what they need; they think about what they want.

One of the biggest reasons people here dislike "Agile" is because management uses it as an excuse not to plan anything, and fly by the seat of their pants every two weeks.


"...there is no well defined goal nor can one exist."

Consulting is the pejorative we gray beards used for that activity.

You reminded me of another pithy throwaway line:

Agile didn't improve outcomes, it just reduced the cost of failure, allowing teams to fail many more times with the same budget.


> Consulting is the pejorative we gray beards used for that activity.

Those hypothetical gray beards may come up with all the pejorative terms they need, but that only hides the fact that they are entirely oblivious to the reality of running a successful software project, one which actually meets the client's requirements and delivers working products. Therefore, they spend their time coming up with pejorative terms while they insist on wasting their time and effort forcing the proverbial square peg (waterfall) into a round hole (software development projects).

> Agile didn't improve outcomes,

That statement is patently false. No one can make that claim with a straight face in a world where paying customers make it their point to change fundamental requirements on a weekly basis.

In the old-timey waterfall world, a waterfall project that's executed flawlessly is a project that ends up delivering the wrong product, one that fails to meet the client's basic needs, thus leading to a very unhappy client who may even feel that he has been played.

> it just reduced the cost of failure

A sequence of small failures that converge to the client's needs is a whole lot better than a single unmitigated major failure that's ensured by following Waterfall methodologies.


>but that only hides the fact that they are entirely oblivious to the reality of running a successful software project

Wow, amazing that the giants whose shoulders you stand on never ran a successful software project.

>In the old timey's waterfall world, a waterfall project that's executed flawlessly is a project that ends up delivering the wrong product that fails to meet the client's basic needs, thus leading to a very unhappy client that may even feel that he has been played.

Waterfall was iterative as well, when the needs changed, so did the project plan. You bought that line, but that's not how it ever was in my experience. You know those version numbers software has? Those are the iterations. We'd cut a release about every 3 months.

Based on your comment, I suspect you've never done anything outside of scrum / agile, so you are comparing it to some mythical way of doing things you heard about (undoubtedly from scrum consultants.)


> Wow, amazing that the shoulders of the giants you stand on was never a successful software project.

Those giants you're referring to made a lot of mistakes along the way.

One of those mistakes was blindly trying to force project management practices that were driven by the need to allocate material resources onto projects where the only allocatable resource is man-hours. In projects whose success is determined by how well material resources are spent, redesigning something midway is entirely unthinkable and can even dictate the project's death. That's not the case in software development projects, where the only resource is the fixed amount of man-hours at the project manager's disposal. With the development and adoption of a couple of software engineering practices aimed at keeping the project in a deliverable state, redesigning entire modules is essentially free. Therefore we end up with project management challenges that superficially may appear to be the same but are actually fundamentally different.

Therefore, different types of constraints result in different optimization problems that lead to different solutions and require different approaches.

> Waterfall was iterative as well, when the needs changed, so did the project plan.

The keyword you've used is "needs". You're assuming that changes are exceptional. They are not. In software projects, requirement changes are the norm, not the exception. Changes aren't occasionally needed, they are constant. If your goal is to meet the client's needs then you need to meet the client's needs, and not some line in the sand that has no bearing on the paying customer's goals.

> Based on your comment, I suspect you've never done anything outside of scrum / agile, so you are comparing it to some mythical way of doing things you heard about (undoubtably from scrum consultants.)

You suspected wrong. In fact, I have far more years of experience in waterfall projects in real world engineering projects than in scrum/agile. Waterfall projects make sense when the requirements make sense. Software development projects based on basic software engineering practices don't incorporate some fundamental requirements found in engineering projects, thus their efficiency can and does improve by following adequate practices.


You and I have very different expectations. Long ago I decided to walk away from work lacking clarity. I’m a dev, not a therapist.

re Outcomes, my data is old, I'd love to be proven wrong.

I’ve read (elsewhere) that high(est) functioning orgs like google, facebook, netflix have made great progress. But the rest of us are just banging the rocks together.

PS- PMI & critical path, as I've done it, is the opposite of waterfall. One hard-earned trick is proper ordering of the work. E.g., after a project kickoff, the first deliverable is the press release, the second deliverable is a demo (even if it's entirely faked).


So, agile improved outcomes? I'd rather have 1 big win a year and 11 cheap losses than 5 expensive losses and then bankruptcy before a big win.


But you have to include 'estimates', which ruins everything (in non-trivial cases, yadda, yadda).


Estimating is my Achilles heel. Always has been. I'm way too optimistic.

My workaround was to do jelly bean estimating. Get guesses from everyone, then use the average. Worked surprisingly well.

We did other things to meet that target date, honor that estimate, of course. But that's a longer story.
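For illustration, the jelly bean approach described above can be sketched in a few lines of Python; the guesses and units here are made up, not from the comment:

```python
# Hypothetical sketch of "jelly bean" estimating: collect everyone's
# independent guesses and use the average as the team estimate.
def jelly_bean_estimate(guesses):
    """Average a list of independent effort guesses (e.g. in days)."""
    return sum(guesses) / len(guesses)

guesses = [3, 5, 8, 4, 10]  # each team member's guess, in days
print(jelly_bean_estimate(guesses))  # 6.0
```

The averaging matters less than the ritual, as the commenter notes further down: the shared number is what secures buy-in.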


> I'm way too optimistic

Have you ever thought much about why, though? When I was young, I was super optimistic - and I got burned by all of the surprises (including the underspecified or unstated requirements), so I started to estimate pessimistically based on experience. But then I got stuck in this loop: "How long will this take?" "Probably two weeks" "What? Why two weeks? Why not two days? We only have two days. Say it will only take two days, or tell me everything you're going to be doing on an hour-by-hour basis to justify this high estimate!" So now I've just gotten really good at figuring out how long they want me to say it will take, say it will take that long, and shrug my shoulders when it takes longer. Since everybody else is doing the same thing (as they don't have any choice), it doesn't surprise anybody.


I have thought about it. A lot.

I'm told I work too slow, dive too deep. Now I'm trying to figure out how to do just 80%, "good enough".

When I was a kid, I published shareware. After getting a few 3:00am wake up calls from irate customers, I said "never again". I spent 15 years designing, engineering away technical support calls. It mostly worked. Later on... When my team(s) burned a CD, we rarely had to issue patches, hot fixes.

Customers loved us.

I thought that was the pinnacle of human achievement.

Now, I feel like I'm still fighting battles over things no one cares about any more.


Customers may talk about hating bugs, but when you ask them to put their money where their mouth is, 95% of the time they'll buy buggier software with more features.


This. Figuring out how to stop at 80% is a skill I'm struggling to learn. And I don't mean "learn to half-ass things"; in the vast majority of cases 80% is more than good enough to get the job done.


What I've learned in engineering (any form, including software) is to take the estimate (acquired in any decent enough way, like averaging) and multiply it by PI (or 3 if that suits you) to get a reasonable amount. In practice it works more accurately for larger sums than smaller ones, so your mileage may vary. I guess it is easier for everyone to guess smaller workloads more accurately.

In my practical experience, I multiply the amount gathered from the previous statement by two to have sufficient negotiation space for people 'higher up the chain' who have no idea about anything of this and simply pull estimates out of thin air or their rear without any proper experience except "previous time it was xxx so it should be the same" (which it almost always is not).

edit: I prefer to have some time left over on larger projects to avoid overrunning, even though overrunning might be common practice. Usually, though, time is cut short due to delivery constraints, or the product is sold before anyone asks development, and you get the 'wasn't it done yet?' speech.
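The two multipliers described above compose into a simple sketch; the numbers are illustrative, not from the comment:

```python
import math

# Sketch of the rule of thumb above (assumed values): average the raw
# guesses, multiply by pi for a realistic figure, then double it to
# leave negotiation space for people 'higher up the chain'.
def padded_estimate(guesses):
    base = sum(guesses) / len(guesses)  # team average
    realistic = base * math.pi          # the "multiply by PI" correction
    return realistic * 2                # negotiation buffer

print(round(padded_estimate([10, 12, 8]), 1))  # 62.8
```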


Here is the "chart" that I use when determining how accurate an engineer's estimate will be. The left column is the hour estimate to look up the volatility from. The right column represents the potential volatility (probability of increasing in time) of the work item based on the hour estimate.

Hour Estimate : Volatility Level

    1 - 5 : Little to no volatility.

    5 - 15 : Small volatility (a day, maybe two).

    15 - 30 : Medium volatility (multiple days).

    30 - 60 : Medium to High volatility (days to weeks).

    60+ : High volatility (multiple weeks).

The reason behind this chart is that when determining what goes into a release there is a tendency to treat all items as interchangeable by hour count. What I mean is that most people will say, "Three 15 hour work items are equal to one 45 hour work item." or "Six 5 hour work items are equal to one 30 hour work item." The worst offender: "Four 15 hour work items are equal to one 60 hour work item."

Additionally, as a work item increases in size the amount of "knowable things" increases as well. Among those "knowable things" there are many known knowns, known unknowns, and unknown unknowns (thank you DR for that gem). There's a near 0% chance that a team will get all of the requirements in place up front on a 60 hour case.

If a scoping team is told that they have 240 available hours to work with, then they will pick a combination of items that fits within the 240 hour box. If they have four high priority 60 hour estimate items then they may pick all four for the release. In the event that four 60 hour items are chosen, I would place a near 100% guarantee that the release will not go out within the 240 hour time frame. I would even wager that it will be up to four weeks late.
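The chart above can be read as a lookup table; here is a sketch in Python. The inclusive-upper-bound handling is an assumption, since the chart's ranges overlap at 5, 15, 30, and 60 hours:

```python
# Volatility lookup following the chart above; boundary handling
# (upper bound inclusive) is an assumption, not stated in the chart.
def volatility(hours):
    if hours <= 5:
        return "little to none"
    if hours <= 15:
        return "small (a day, maybe two)"
    if hours <= 30:
        return "medium (multiple days)"
    if hours <= 60:
        return "medium to high (days to weeks)"
    return "high (multiple weeks)"

# The point: three 15-hour items do not carry the same risk
# as one 45-hour item, even though the hours sum identically.
print([volatility(15), volatility(45)])
```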


> Estimating is my Achilles heel. Always has been. I'm way too optimistic.

That's kinda the whole point. Everyone sucks at estimating. Scrum proposes a way to do "what greybeards call project management" in a way that deals with this universal inability.


I'm having trouble articulating this, please forgive:

I didn't explain "the other stuff" for why jelly bean estimating works.

The number doesn't matter. It's the shared hallucination (consensus) around the deadline. It short-circuits and resolves the debate about "how long". It secures the team's buy-in.

Once we had a deadline, we managed the work to match. Closest analog I can think of is Kickstarter: initial goal and stretch goals. We had the must haves (stuff listed in the draft press release). We also had dozens of nice to haves we'd sneak in, time permitting.

Agile/Scrum sprint-style estimating is too fine-grained, the horizon too near, for effective estimating.


Nothing protects against toxic management.

"Evidence Based Scheduling"

https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...
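A loose sketch of the core idea in the linked post, not the actual FogBugz implementation: track each past task's estimate-to-actual ratio ("velocity"), then Monte Carlo new estimates against randomly drawn past velocities. All numbers here are made up:

```python
import random

# Rough sketch of Evidence Based Scheduling (assumed simplification):
# each past velocity is estimate/actual, so dividing a new estimate by
# a randomly drawn velocity simulates one plausible actual duration.
def simulate_ship_dates(estimates, velocities, rounds=1000, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    totals = []
    for _ in range(rounds):
        total = sum(e / rng.choice(velocities) for e in estimates)
        totals.append(total)
    totals.sort()
    return totals[len(totals) // 2]  # median projected total hours

history = [1.0, 0.8, 0.5, 0.6, 1.2]  # estimate/actual, per past task
print(simulate_ship_dates([8, 16, 40], history))
```

The appeal is that the padding comes from each estimator's own track record rather than a one-size-fits-all multiplier.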


I go one step further: Get guesses from everyone, then use the average multiplied by 3. That lets management talk me down by 1/3rd so they feel they've won something, and I still end up with a little wiggle room.


Everyone tacitly assigns a time value to "points" anyway.


Which is fine. The point (heh) is that the mapping can be adjusted in a feedback loop based on what you actually did in the past.


I've always seen there being a maximum size of a ticket for one sprint. So effectively 8, or 13, or 21, or whatever, is two weeks, and all the other sizes are some fraction of that, and you end up trying to calibrate how you vote for points based on how long things take.
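The implicit mapping described above can be sketched as follows; the 13-point cap and 80-hour sprint are assumed examples, since the comment only says "8, or 13, or 21, or whatever":

```python
# Sketch of the implicit points-to-time mapping: if the largest allowed
# ticket (assumed 13 points here) is one two-week sprint, every other
# size is a proportional fraction of that sprint.
SPRINT_HOURS = 80  # two weeks of one developer's time (assumption)
MAX_POINTS = 13    # largest ticket allowed in a sprint (assumption)

def points_to_hours(points):
    return SPRINT_HOURS * points / MAX_POINTS

for p in [1, 2, 3, 5, 8, 13]:
    print(p, round(points_to_hours(p), 1))
```

In practice the mapping drifts, which is the feedback loop the parent comment describes: you recalibrate your votes against how long things actually took.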





