Hacker News
Doing old things better vs. doing brand new things (a16z.com)
149 points by razin on Oct 18, 2020 | 46 comments


> Another example is business productivity apps architected as web services. Early products like Salesforce were easier to access and cheaper to maintain than their on-premise counterparts. Modern productivity apps like Google Docs, Figma, and Slack focus on things you simply couldn’t do before, like real-time collaboration and deep integrations with other apps.

I'm not sure I'd consider any of these brand new things, even in the context of cloud nativity. CSCW work in particular goes back to the mid-80s! These aren't new things, but iterations on old things -- the gap between iterations is so large that they _feel_ new, but the changes they provided improved accessibility to existing functionality, not innovative new functionality.

- Networked CRMs were preceded by so, so many database frontends.

- Google Docs was bought-not-made, and its components were also preceded by peer-to-peer collaborative editing tools outside of the web browser (SubEthaEdit, but also Instant Update, Aspects, and other early 90s/00s groupware suites).

- Collaborative drawing and design software goes all the way back to Xerox PARC projects like VideoDraw.

- Slack wasn't even first of its breed!


I would also say that Google Docs took a traditional desktop word processor like Microsoft Word and put it in a browser tab. Supporting multiple simultaneous user edit sessions was brand new, but I'm not sure it was transformational in terms of expanding the word processor into something else.


Took the traditional desktop word processor and chopped a bunch of features out.

Unfortunately, some of them, like captions for images/charts/diagrams and numbered headings, were really useful.


I know a lot of people who adopted Google Docs because A) it was free, B) it didn't require a download, and C) they had already bought into Gmail.


I would also question just how valuable concurrent editing of a text document is. It's cool that it's technically possible, but it doesn't seem that useful for multiple people to be editing the same sentence or paragraph in real time. In practice, I find that people divvy up the document into sections and each work on their assigned section independently. This makes Google Docs / Office 365 a moderately lower-friction version of everybody working in their own documents and copying changes to a master doc with version history, which has been a supported workflow in desktop Word for quite a long time.

Similarly, Slack can be pretty useful, but I doubt it feels revolutionary to anyone who grew up using IRC.


> this makes google docs / office 365 a moderately lower friction version of everybody working in their own documents and copying changes to a master doc with version history, which has been a supported workflow in desktop word for quite a long time.

It’s more than just this. There is a big difference between authoring parts of the document alone, and being able to see the current state of the document as a whole at all times live while people are writing it.


The Hawthorne effect is actually a negative influence on such collaborative writing in the general population, too.


I’m familiar with the Hawthorne effect but I don’t understand what you mean here. Could you clarify?

Do you mean to say that collaborative document editing is perceived to be more efficient but that it is in fact not more efficient?

Assuming that is what you meant, that may be true, but for me when working on a document with others there is more to it aside from saving time on writing together. Specifically, being able to see the document as a whole means that I can write sections of text that are more coherent with the rest of the document, and also that I can suggest changes in other people’s sections while we are working on it.

Collaborative editing is also useful when discussing a document on teleconferencing or when we are in the same room. For example if we are talking about a budget and we all have the Google sheet open I can point to a cell or group of cells, say something about it and make a change immediately in accordance with what we conclude, and they can make concurrent edits elsewhere in the sheet for other parts of the sheet that are affected by our discussion.


Concurrent independent editing is mostly useless.

Concurrent commenting, discussing, suggesting and accepting suggestions / amendments is mightily useful.


Sounds like someone's never been locked out of a document they need to edit.

It's honestly tough to remember just how bad check-in/check-out document control was a few years ago. Remember the days when one person did their section in a totally different style from the rest of the document and you only discovered it at 3 am the morning it was due because they waited until the last minute to email it to you, and then at 5 am when you had finished reformatting it you realize that all his references to another person's section are incorrect because he wasn't at the meeting where that person said they reorganized their part, so you fix that until 7 am, and then you go to add in the bibliography but the guy who had that last sent an old version in his email so you have to call him and get him to send you the real version, and the next thing you know it's 9 am and you need to give a presentation on this 400-page document after getting no sleep and not even having time to grab a cup of coffee.

I honestly don't understand why anyone wouldn't take advantage of concurrent editing.


At work, our typical document collaboration workflow is to have the master version in our VCS (with exclusive checkout); then we check out, edit, and check in to push changes. Small edits get made directly in the master document, while larger changes get staged locally and copy-pasted. Maybe I just have process Stockholm syndrome, but it really doesn't seem that painful. The main bottleneck is getting all the stakeholders to review each round of edits, not lock contention for the editing itself.


Yeah, that was the standard document control method 20 years ago everywhere. It works, but it's incredibly inefficient.

Such systems make any sort of global change extremely taxing (so many people need to sign off on it, and there's going to be loads of bikeshedding), and they naturally favor heavy compartmentalization. The work winds up divvied up among many people working in parallel, but they're not collaborating - you're waiting on Mike to finish his portion, not working with Mike to make a better document.

It's also very difficult to divide labor efficiently. For example, say you have an equation-heavy document: you could have one person do the first half and another do the second, but it would make more sense for one person to do the text and another to do the equations, as that way neither of you breaks flow. This is possible but a real pain in the butt to do simultaneously with check-in/check-out; alternatively, you can have one person do the text and then hand it off, but that might be very suboptimal if you are time-constrained. This sort of work is a breeze in a modern collaborative environment, though.

Finally, from a true document control perspective, the old check-out method seems sensible: you know what changes were made between check-out and check-in, and you know who checked it out, but you don't really have any better knowledge than that. With a modern system you can see who typed what and every edit they've made, including when they've typed out a long section and then decided to delete it and try something else. This can be incredibly useful for keeping track of how an idea evolved. It's also easy to see who actually contributed what: while traditional document control will tell you that so-and-so made a change to some section, if the change was just pasted in it's difficult, if not impossible, to tell what the difference from the previous version was. And while traditional document control focuses on limiting the chances of a mistake, modern methods let you fix mistakes easily.

Now I'm not saying that there's no way to get what you need with check-in check-out document control, but we've reached the point where you don't need to make your work compatible with the process.
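The exclusive-checkout ("lock-modify-unlock") workflow described in this subthread can be sketched as a toy model. This is only an illustration of the contention being debated; `LockedRepo` and its methods are hypothetical names, not any real VCS's API:

```python
# Toy model of exclusive check-out ("lock-modify-unlock") document control.
# Illustrative sketch only; not a real VCS interface.

class LockedRepo:
    def __init__(self, text=""):
        self.text = text
        self.holder = None  # who currently has the document checked out

    def checkout(self, user):
        """Take the exclusive lock and return the current master text."""
        if self.holder is not None:
            raise RuntimeError(f"locked by {self.holder}")  # lock contention
        self.holder = user
        return self.text

    def checkin(self, user, new_text):
        """Replace the master text and release the lock for the next editor."""
        if self.holder != user:
            raise RuntimeError("must hold the lock to check in")
        self.text = new_text
        self.holder = None

repo = LockedRepo("draft v1")
draft = repo.checkout("alice")
try:
    repo.checkout("bob")          # Bob is blocked until Alice checks in
except RuntimeError as e:
    print(e)                      # locked by alice
repo.checkin("alice", draft + " + alice's edits")
print(repo.checkout("bob"))       # now Bob sees Alice's changes
```

The single lock is what serializes contributors: Bob's checkout fails until Alice checks in, which is exactly the waiting-on-Mike contention these comments contrast with concurrent editing.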


For Google docs shared editing predecessors, don't forget the mother of all demos from the 60s.


Presumably DOTB and DBNT should be pitched to investors -- and funded -- in very different ways.

Doing old things better is very amenable to metrics, lifetime value of customer, market-adoption forecasts, cost of customer acquisition, etc. This is the land of venture money with milestones and ever-growing rounds as success unfolds. (Or asphyxiation if success doesn't happen.)

Doing brand new things probably is far more likely to start via bootstrapping, passion projects, university research and other areas where open-ended curious inquiry is tolerated better. Dixon (OP) loosely acknowledges this when he writes that entrepreneurs "usually spend many years deeply immersed in the underlying technology before they have their key insights."

Eventually the path forward will be clear enough that the VC/metrics folks can show up for later financing rounds and do their thing. But I'll argue that venture's metrics-driven feed-or-starve system is more likely to mess up this early exploration than to help it.

Even though VCs will insist that they are equally good at both.


I would assume it should be the other way around because of different risk levels.

Doing old things better is relatively low-risk (because the thing already works) and therefore should be funded by debt, not equity.

Doing brand new things is high-risk (nobody did it before), so equity-based funding (maybe after a period of bootstrapping) for it would be more appropriate to the VC model.


Interesting, thanks for your comment. Genuinely curious, why do you think that DBNT is better for bootstrapping? Intuitively I would've guessed that bootstrappers don't have the resources to create entirely new markets. In existing markets, the value proposition should also be more clear and battle tested, no?


I'll nominate GoPro as a case where bootstrapping's quirky virtues helped make everything click. Nick Woodman's basic idea dates back to 2002 or maybe earlier. His early iterations of ultra-mobile cameras were slightly successful but not yet the winning version.

By poking around, subject to the uniquely loose/tight constraints of what he could fund selling beads (!) and early prototypes, he bought himself a lot of time to try to get it right. Perhaps it was a case of available technology needing the better part of a decade to catch up to his vision.

With a big, impatient burst of SV funding at the start, version 1.0 of GoPro might have been a Segway-like flop.


I used to wonder why there's always a constant flood of new technologies, frameworks, products, etc. instead of improvements to existing solutions.

Now I believe I know why: it's about growth for the business, at all levels. New technology may give the business a competitive edge, but more importantly it will give the developers who created it, as well as those who learn it, a career boost.

This is why we get things like software updates for a UI that no one asked for.

In the end, the core idea is that "good enough" is never enough. Because improvements, not maintenance, drive careers forward.


Maybe. I think it’s less about career growth — though I’m sure that’s the goal for some — and more about the reality that it’s often more fun to build something new, rather than contributing or adding-on to something that already exists.

A new framework is shiny and exciting. It doesn't matter if it's a rehash of something that already exists, with a few minor changes or some differences in approach; it's new, and there is a sense of excitement around building something new that just isn't there when you're contributing to something that already exists.

Part of the reason stuff like NIH (not invented here) is so common isn’t just because of the belief, true or not, that an in-house solution will be preferable, but because augmenting and improving and refining something that already exists is hard.

But the point of the essay, as I read it, is actually making a very different argument. It’s not talking about how frequently the same concepts are reinvented or remixed, but about how when there are genuine breakthroughs in technology (however you want to define it), the initial impulse is to use those breakthroughs to solve the same problems as whatever existed in the past.

But doing that, while pragmatic in the beginning, can distract from what makes that new innovation so innovative.


> improvements, not maintenance, drive careers forward.

Google puts a piece of software on death row by declaring it “in maintenance mode”. That's the state where, no matter how difficult it was to work with and how valuable the improvements made to the software are, the results carry “negative” weight in perf review.

That is, someone who devotes considerable time to software in maintenance mode is considered to be working against the norm, should be taught to behave more correctly, and is sometimes punished for consistently refusing to change that behavior.


Why doesn't Google open-source the projects it has killed?

Hubris.


They cannot be killed easily.

They also cannot be easily open-sourced, as answered below.

They were put on death row, in the sense that people with authority deemed that they rightly deserved to be killed.

But any software that needs an authority to declare it dead certainly has a life of its own. One such case is when the practitioners see that the software has great value and an inherent vitality to sustain itself, but it is simply rejected by the people with authority.

Did you know that Borg itself was put on death row once, by Google's most powerful technical authority?

That history ruined my respect for many of the once-golden images. But in the end it was a great lesson: anything that is valuable is also worth fighting for.

Just an example of how ridiculous an authority can become if allowed to go unchallenged. Even in the technical sphere, which is thought to be more rational.


Because they’re largely dependent on internal, non-public infrastructure and libraries.

Open sourcing things is nonzero effort.


>software updates for a UI that no one asked for.

No one ever asks for these things, even if they are needed. People complain about every single YouTube redesign, but when you look back at the past designs they look ugly compared to the current one.


"look ugly" is absolutely trivial (and subjective) in comparison to the loss of functionality and efficiency that these horrible redesigns have caused.


“Functionality and efficiency” is not always important. I don’t want YouTube to be maximally efficient, I want it to be an easy-to-understand and pleasurable watching/searching/browsing experience.


Sometimes. More often than not though, I suspect it stems from ignorance, hubris, and incompetence (and sometimes all of the above).

Basically people that don't know better, but think they know better, try to reinvent the wheel.

https://xkcd.com/1831/


Sometimes too. I think a big factor also is coordination costs. Working with a bunch of engineers outside of your company, with different goals and visions, is very time consuming. For companies who need to build-or-die, spending a bunch of time and money building relationships with people who are just as likely to act as roadblocks and gatekeepers just isn't an attractive proposition.

In this case it's not about new technology per se, but about new control and new ownership, which is an excellent way to reduce coordination costs.


It's also one hell of a lot easier to reinvent a shitty wheel than it is to learn a good one. Add that to the phenomenon of temporary hacks becoming permanent nine times out of ten...


Also known as sustaining innovation vs disruptive innovation.

When you’re embedded in a technology or business, it’s harder to come up with something totally new; your perspective is often limited by your situation (everything is a nail when you have a hammer). And since something brand new can also be damaging to the current business model, it feels uncomfortable.

Often you need somebody with the right mix of experience and independence (plus enough capital) to create something truly disruptive.


Is that what happened to Kodak? They were so reliant on film-developing sales that they didn't want to invest in digital camera technology, and then they were left in the dust when it inevitably became mainstream.


It seems so. They played a major role in pioneering the technology but couldn't see or leverage its potential. The reaction inside Kodak was to try to suppress and ignore the technology and hope it would go away and not affect their existing market. Other companies have successfully tried that, but Kodak obviously made the wrong decision and paid the price.


"The most common mistake people make when evaluating new technologies is to focus too much on the “doing old things better” category."

While I get what they're driving at, I think there are good reasons to focus on this; you have some plausible chance of getting it right. The track record of people, even really smart people who work in technology, being able to predict the 'brand new things' that technology will be used for, is awful. Plus, if you want to get the technology up and running, it needs to get used for something old (better) first. If the early moviemakers hadn't made play-like movies first, they might never have learned enough about film to make movies that weren't like that.

Just because the early uses of a technology are doing old things better, but the later brand new things turn out to be a bigger deal, doesn't mean the 'old things better' phase was a mistake. It is probably necessary.


What are some examples in the DBNT category that created unicorn startups? I know lots of DBNT that happened inside of large companies (e.g. Bell Labs). What are some examples of cases where the companies were able to create massive market share from scratch?

Common examples of DBNT were really just DOTB:

Google -> not the first search engine

Apple -> not the first computer/music player/touch phone

Spotify -> not the first music streaming platform

Nvidia -> not the first GPU


Coming up with the idea of a portable personal computing device is easy. It's even possible to manufacture it (e.g. the Apple Newton). However it won't affect people's lives until it is quite polished. Similarly figuring out that people want to stream all their music is easy. The innovation in getting it to work, overcoming complex technological and social barriers is hard to explain to outsiders and less obvious. But it opens up these 'old things' to new people who couldn't access them before.

For another unicorn, clearly opening up new things to people: Oculus, though it wasn't the first virtual reality headset.


Oculus was the first to realize that we now have the technology to do VR right and forget about all the failed attempts from the 90s.


I don't think you can look at DBNT as a matter of being first to try. I think what A16Z is asking is: are the things you're doing common activities among the existing population?

So Google was just another search engine, therefore, it was DOTB

For Apple, computers were not mainstream; the others can be debated. So the Apple computer was DBNT.

Before Spotify, streaming of (almost) any music you wanted was not mainstream; therefore, they were DBNT.

NVIDIA... I'm not sure I can comment. Sure, they weren't the first GPU, but did they push the industry to do things that couldn't be done before? Or were they just riding the adoption curve? Anybody able to answer that?


You can market your end result as an old thing, but if you aren't introducing anything new to that market, you have a speculative hot potato, not a good business. If the neighborhood already has five boba shops, the sixth one that opens is facing a lot of competition.

Which in some respects doesn't mean you have to be innovative at any point, just well timed and positioned. Underserved niches often love "more of the old stuff" if it brings back something they thought was lost. And bringing back the old stuff can be a gamble just as much as doing a groundbreaking thing; in that case, having a way of doing it better is a great derisk.


From a sufficient informational distance, that boba shop looks like "just another coffee shop". The line between new and better can be a little blurry.


Sometimes you just have to copy what worked for someone else in their venture and apply it to your own venture. I know that the world tells us that we need to be original; that we need to come up with new ideas to apply to our lives, careers and businesses; that we are special. However, I believe that sometimes, we can achieve tremendous success not by trying to come up with new ideas but by leveraging ideas that have already been tested by someone else.

The article https://leveragethoughts.substack.com/p/originality-is-not-t... provides a case study.


How do we define new things though?

Search engines, smartphones, digital cameras, PCs -- what else? All of the leaders in those categories were Doing Old Things Better.

It wasn't about doing old or new. It was simply about the timing and the roadmap. The Apple Newton had all the right ideas, but the technology was far from ready; it was 15 years ahead of its time.

Netflix? Remember there was a War between Windows Media, MPEG and RealVideo before the days of Flash Video Streaming?

To me, doing new things is invention. Doing old things better is innovation. At the end of the day, all that matters is whether there is enough value creation; it seems to me DOTB vs. DBNT is the wrong question to ask.


Doing old things better & doing brand new things.

Always seemed to go together in some way. I want them both.

When I was a youngster, the NFL was still drinking Kool-Aid.

Just like all the college teams, except one.

They had this exclusive high-performance drink that made other teams jealous.

Scoring upsets could sometimes be blamed on the beverage, so they ended up having to share it.

And when they released it commercially, it flew off the shelves.

By the time I got to the university, they were seriously doubling down on the innovation thing every year.

It was more crowded than ever and from day one I felt a certain particular pressure I've never forgotten.

We were supposed to invent something new, or new ways of making the same old stuff.

There had been a very strong trend in many ways which could make it more likely for the next Gatorade to be invented.

They were going to flunk the vast majority, sheesh, they were going to need to test all the students to find that kind.

Now I see that brand new things are much less common mainly due to limited opportunity, but doing old things better is how you maintain readiness for that kind of opportunity.


This all depends on where you draw the lines.

One could argue that nothing really new has happened since digital audio, video, computing and communication - all technologies that are 50 years old.

All that has happened is that they got cheaper, faster, smaller. Most certainly, a16z has invested mostly in startups that just fall into this category and very little in building fundamental tech.


Find products with low NPS and high fragmentation. No need for an entirely novel idea. Even the best products we use every day were very likely not brand new ideas.


I don't believe in 100% original work. Everything is built upon the past. Doing new things will always be some form of doing old things better.


Then there's doing old things better by doing brand new things, e.g. Tesla and SpaceX.



