Ask HN: What are successful projects that started as prototyped hypotheses?
114 points by vlaaad on Dec 2, 2019 | 43 comments
As I see it, there are 2 main ends of a spectrum when it comes to releasing projects that aim to solve problems:

- top-down approach: you state your problem, think it through, consider all related work in this problem space, think very hard, come up with multiple possible solutions, evaluate their trade-offs and implement the best one (a-la Hammock-Driven Design: https://www.youtube.com/watch?v=f84n5oFoZBc&t=1816s)

- rapid prototyping: you state your problem, come up with the easiest possible solution, test it, repeat (a-la Lean Development: https://en.wikipedia.org/wiki/Lean_software_development)

I know some great examples of the first approach (Clojure, Datomic, maybe Git?), but I don't know that many successes of the second approach. Furthermore, my personal work experience leads me to believe that the second approach, at least in practice, leads to a lot of wasted effort, so I'm interested to know whether it is so or not.




I get the feeling that the "wasted effort" in the second approach is about rewriting code.

Rewriting code is good in my experience. It's always better the second time around (or third, or fourth). It's not that my first attempt was rubbish, it's that I didn't understand the problem as well as I did the second time (and so on).

Lean is also about avoiding premature optimisation. Which is hard because it cuts against the grain of our engineer sensibilities. Doing something "good enough for now" is tough, when you know that with just a few more days' effort you could make it bulletproof. But I've had to delete "bulletproof" code so many times, because it turns out the product didn't need that feature, or it needed to work differently.

In the long term, Lean avoids more wasted effort, in my experience.


There's a difference between Lean and Instant Legacy Code. The key difference is a focus on simplicity, including cyclomatic and compositional simplicity. Simple is not the same as simplistic. Lean as a methodology is meant to remove obstructions and waste in manufacturing. Effort is not a thing that can be wasted; excess code is the waste. Fixing bugs is wasted time compared to not having them in the first place.

"Good enough for now" is a great excuse to keep hacks around and letting them accumulate to the point where code is unmaintainable. Meaning too much code, meaning waste. Even better is "it works now, do not touch" especially when current code base is untested.

Programmers are typically lazy and do not bulletproof anything ever. Thus rampant security issues.

The alleged wasted effort is from the point of view of some manager who doesn't get to tick boxes quicker. (And disregards later massive drop in development velocity while presumably demanding same results.) This means spotted issues are pushed towards never unless a customer reports them. Which they won't or even can't so you get your software brand recognized as buggy trash - with workarounds being commonly peddled among users and devops.


This is probably out of place, but are you OK? I burnt out about a decade ago, and would have agreed with a lot of what you're saying back then. It took me years to get back into a happy place with tech.

Dysfunctional teams and organisations produce this kind of cynical rage, not Lean.


Dysfunctional companies produce buzzword laden or metric based management. (As opposed to good software.)

Lean is not a software development methodology. It was made for factories and production lines, a terrible fit for most kinds of software. The only salvageable parts are iteration and listening to frontline workers to get process improvements, plus "autonomation"/poka-yoke in the form of automated tests. That is not enough to constitute a methodology.

The "Lean for software" page gives contradictory definition of waste - you're supposed to minimize defects while at the same time minimizing rework. I'd like a crystal ball that enables it. Plus you cannot apply it without absolute control over the whole development process. Any place that is a black box (say, both set of features and deadlines are given) the process. Thus it fails in corporate environment.

Likewise, general agile methodologies are easily perverted into what I just described, by skipping the refactoring and redesign parts in service of deadlines. That model works only if you throw things away like startups do, or if the project is small and self-contained.

Usually small projects are either low value or grow big. Cf. Twitter or YouTube when they started versus now. It's even worse if you have to interact with quickly changing parts owned by another team, even in a medium-sized project.


I agree with everything you said, but I must point out that Lean comes from Toyota. While most people know about the production system, which is indeed applied to factories, Toyota applies this to product development too. Product development is much closer to what software development is. Unfortunately there is a lot of misinformation on the topic, but there is a very good book, called "Toyota Product Development System", which describes how Lean is applied there. There is an insane amount of valuable information in there; every software company should be at least aware of those engineering practices.


Any project that has both features and deadline set by management is going to fail. That's a fact of software development. Lean/Agile doesn't solve that, or even attempt to.

It also doesn't attempt to minimise rework (it values iterative approaches), and is strong on exploratory prototypes.

Again, I think what you're criticising is "Agile as implemented in dysfunctional organisations" rather than actual Agile.

I'm building a startup, though. My definition of "good" software is probably different to yours (and that's as it should be).


This is a common view these days. But as a technical founder, I disagree. Once you launch the bare-minimum first version, different customer segments pull you in different directions. Often you chase one path, find that the customer segment is not lucrative enough, and chase the next. Until you find a compelling use case with money-making potential, you will end up with various sets of features used by different customer segments.

This might still be less wasteful than building an entire product and finding no customers. But it is taxing on the technical founder! Lean-washing shouldn't set wrong expectations for the technical founder involved in startups that follow the rapid prototyping approach.


Man, I feel your pain. I've been through this a few times, and I'm just about to go through it again. Yes, it's tough. I keep hoping there's a non-tough bit later, when most of the technical problems are solved and we get to sit back a bit and watch the business folk hustle. I haven't found that point so far, though.


I agree. Ultimately you never know whether your effort was wasted until the prototype is made and you can see whether it's successful, which is the only purpose of a prototype.

> It's not that my first attempt was rubbish, it's that I didn't understand the problem as well as I did the second time

I think that's too soft: I know my first attempt will be rubbish, so I intend it to be so. To me, the point of a prototype is to help you learn the problem more than to solve it.

If you plan to keep your prototype if it works out then I think you have missed a trick, a prototype should aim to fail quickly.

If your prototype is useful then I think it fails its point as a prototype.



Maybe the OP needs to clarify what type of "problems" exactly he is referring to. The examples he gives point to problems requiring significant engineering effort, which is very different from the examples you note above.


The difference is how well defined the problem is. Clojure, Datomic, and Git all benefited from predecessors that were used extensively and had definable shortcomings with a clear enough technical solution.


Yes. I would say people or teams which developed solutions that look as if they "just appeared out of careful consideration" did not -- they simply learned from others and took in that prior world knowledge. The solution space for most problems is not "one global optimum" -- use cases have to be taken into account (which is why great projects like Clojure are not used for everything).

I would go so far as to say there isn't actually a dichotomy here -- you should be swapping between launching something with a hypothesis (in lean mode) and then gathering feedback and considering alternatives as you are proven correct/incorrect (hammock mode). I think Gall's Law [1] is also relevant here:

"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."

If all you do is think and think, then you open yourself to mis-timing a solution, feature and scope creep, and risking "unknown unknowns". If all you do is launch and incrementally iterate you'll be stuck solving very narrow problems.

[1] https://en.wikipedia.org/wiki/John_Gall_(author)#Gall's_law


Top-down generally only works for a well-understood problem domain, and even then it only holds up for focused projects where you have the power to declare what is in scope and out of scope in a very strong fashion. This works better for dev tools, libraries, middleware or other projects that are abstract and not tied too closely to a specific business or end-user goal. In other words, the more abstract the tool and the more technical the audience, the more likely that you can drive massive impact while maintaining a simple vision and avoiding all kinds of edge cases and incidental complexity.

Rapid prototyping is more optimal for any end-user product or any new domain, because it’s a faster way to discover the unknown unknowns, both in terms of user features as well as technical challenges you may not have anticipated.


I believe Git actually belongs to the second group rather than the first.

Linus built a working, self-hosting prototype in three days, mixing a lot of his learnings from BitKeeper with his knowledge of disk management [1].

To me, that's rapid prototyping: just enough domain knowledge to make it work well for himself. He didn't spend a bunch of time thinking or coming up with a solution, since he was actively building Linux at the time. The key is that he enlisted the help of others to build Git, and eventually handed it over, since he wanted to focus on Linux.

This all comes with a huge caveat in that Linus's 3 days == 1000 of mine. His 'just enough' knowledge is near expert level.

As others have asked: what are you trying to build? A technical solution or an end-user solution?

Technical solutions do require a lot more domain knowledge than a Twitter/Airbnb (at the early stages).

In the end, I believe in rapid prototyping and failing fast[2]. Learn just enough, whether technical or end-consumer to launch fast.

The thing I agree with 100%, though, is: don't break user-space [3]. I believe this applies to end users of products, whether developers or customers. Once people start consuming something, don't break it. It doesn't matter whether you believe it to be 'correct' or a 'bug'. It's expectation management: slow and easy deprecation.

[1] https://en.wikipedia.org/wiki/Git#History [2] http://paulgraham.com/startupmistakes.html [3] https://lkml.org/lkml/2012/12/23/75


Linus’s three days of coding effort was probably preceded by months/years of thinking about the essence of version control for a large distributed development project (a la the Linux kernel). It’s difficult to just stumble on what became the internal structure of git in a few days if you had only just started thinking about version control systems.

So, for me the distinction between the two approaches (prototyping -vs- hammock-driven) is lately about whether you are solving a largely known/understood problem (equivalent to having domain expertise, in an absolute sense) -vs- solving problems to which you don’t know the answers. In the latter case, there is no shortcut around thinking time.

Or, as they say: “A month in the laboratory could save an hour in the library”


I feel like the question is mixing 2 things.

One is using a top-down approach versus an iterative approach. The other is about the nature of your problem: do you have product risk or market risk?

The lean approach is about eliminating waste, which, in the context of startups, often means building something small and talking to users. But that's only because most startups have market risk. If you have product risk, you should still iterate on your solution instead of building it in one go.

I feel like you are asking for examples where the market-risk was addressed. The most interesting companies would be those where the first test was a total miss and they solved a totally different problem in the end.


I don't think your dichotomy is valid.

Git definitely doesn't fit the first approach. Not sure why you would state that.

Maybe the core of Clojure, with the persistent data structures, fits the first approach, but I doubt the rest of it does (speaking as an outsider to the project).

"Implement the best one" belies a lot of sweat and places where it could have gone wrong. In other words, the initial thinking is not even addressing half of the problem or doing half the work.

The philosophy of Clojure itself is very much based on iteration and interactive programming. You need a lot of action, feedback, and iteration in addition to the "think very hard" part.


Wouldn't Lisp count for the top-down approach? Except that most of the thinking was done without thought spent on an actual implementation, and the actual implementation was done by different people who recognised the practicality of it?


Dark (https://darklang.com) was a bit of both. I spent several years thinking about the problem, and once I had a solution and decided to work on it, went into rapid prototyping to figure out if it could work. To a certain extent, we're still in that phase, just with a much bigger team now.


Two examples of rapid prototyping spring to mind:

* Twitch: started as one guy streaming his life, until they realised lots of gamers were watching, and that those gamers would like to be able to stream too: https://www.youtube.com/watch?v=FBOLk9s9Ci4

* Segment: started as a thumbs up/down tool for professors in lectures, to work out when students were getting confused. They realised everyone just went to Facebook instead, and then wondered why they couldn't tell this when attending remotely! https://www.youtube.com/watch?v=l-vfn97QTr0



Have you ever seen an early write-up of the technology that powered that demo? I know it was all written in Python, but I was curious how they were doing it early on.


This talk has a lot of details on the early version.

https://www.youtube.com/watch?v=PE4gwstWhmc


They used rsync afaik.


Dropbox is a wrapper on rsync... amazing.


The reality often is a mixture.

If you don't know your problem there is nothing to prototype, no minimal viable product. Nothing.

If you spend years analysing and planning you get nowhere.

You need to have an idea, a problem which has to be solved, but you should not get lost in the forest.


At a strategy level, which one of the two points you end up doing depends on who commissions and evaluates the work. If someone hands you a spec and then disappears, only to come back 5 years later and expect to receive a finished project, you'll be working "top-down". If you're trying to solve someone's immediate problem with software, and are in regular contact with that someone, you'll be working in "rapid prototyping". All projects, software or otherwise, are spread around the spectrum between the two endpoints. Where exactly depends on specifics, but if you're starting a new project, the consensus is that you should aim to be closer to the "rapid prototyping" end.

At a tactical level, feature level, you mix both. You state your problem (or get it stated to you), you think it through, hopefully considering at least some related work and doing some hard thinking, come up with multiple possible solutions and evaluate their trade-offs... by implementing their prototypes as fast as possible, because that's the only real way to discover the trade-offs. Depending on how much in a hurry you are, you might pick the first prototype that isn't a total disaster and build your feature from it, then test it, and repeat.

See how "top-down" and "rapid prototyping" is interwoven here. This approach can be expressed as: think before you do, but remember that you only learn the true scope of a problem by attempting to solve it.


I like and have successfully used the Business Model Canvas approach. You fill in all assumptions. Test the riskiest one in the simplest manner, then move on. E.g. if you're not sure you can find the right partnerships, look for that first. If you're not sure the value proposition would work, interview some people, make some mockup PowerPoint slides, and so on.


My personal avalanche detection project https://avanor.se (I haven't started the image uploads for this season yet), seems to fit the second model. It's very simple and small, but is successful in terms of being a prescribed tool for professional avalanche forecasting in Sweden.

I think it's on its third rewrite or something right now, and it runs circles around the only other service in this space in terms of bang for the buck (guess my budget; it's smaller than that).


Just because you build a project with the second approach doesn't mean you haven't thought about alternative solutions or designs as defined in the first.

A lot of the time it’s useful to build products by rapidly iterating, because you see flaws and holes in your thinking, and you get feedback immediately from the people who are going to use it.

Immediate (or shorter term) feedback can be very helpful.

But to answer your question, YC talks a lot about Twitch being an example of the second approach.


A large working system must evolve from a small working system. There's no way to start large.


I think Django is a reasonable example of the second approach.


Human powered flight.


Are you thinking of this as a value driven approach?


Solve a problem you have. It might not scale or spread, but you received some benefit from it.


The United States


Both approaches are wasteful but they waste different things.


Can you expand on this?


The Internet


religion


biological life


PageRank





