Don Valentine has died (sequoiacap.com)
284 points by gatsby on Oct 25, 2019 | 26 comments



Don was a pretty amazing guy. He was the chairman at NetApp when I was there and told me I was crazy to be pushing NetApp to use an untested AMD processor (Opteron) for the high end filer of the time.

What resulted was a solid discussion that ranged from how reliable AMD was to how important Intel was to NetApp, and how to measure the "betterness" of one technology over another. I really respected that he could be opinionated and listen at the same time, always willing to concede to a well-reasoned argument about how he might be wrong about something. He was also really good at poking holes in an argument, so I found myself on the defensive a lot!


Great share. I remember the inflection point that Opteron was. Essentially, it killed SPARC, PA-RISC, Alpha, Itanium, etc. That was a unique moment in time. Hearing a personal anecdote between someone relatively junior at the time and a legend is very interesting to me. It was, really, the moment Linux/x86 "won". That 64-bit hurdle defined where we are today.

At the time, it inspired me to make a pitch that "RISC was dead", despite its technical superiority. Kind of a VHS vs Betamax moment. My pitch worked out, and was probably the defining moment of my career.

I'm pretty happy that AMD is, again, riding high. Underdog stories are more rare these days. And you had a not insignificant part in that.


> At the time, it inspired me to make a pitch that "RISC was dead", despite its technical superiority.

Is it? The microarchitectures of the big MPUs are essentially RISC -- as they always have been, but microcode isn't written 100% from scratch any more.

At the compiler level it's kind of true, and for what turned out to be generally good reasons: compilers aren't as smart as Radin & co thought they'd be, and concomitant ideas like delay slots turned out to be incompatible with advances in memory architecture. So in that regard I'd say that it isn't technically superior at all, which was a surprise to me and many people.

However, CISC evolved too. The original CISC architectures that RISC was a reaction to had lots of features for programmers (think VAX string processing or the function-call instruction!). Nobody writes code like that (except for MPUs); those residual instructions are in fact much slower than compiled code because Intel et al won't pay anyone to optimize them. Instead the focus (apart from vector and some housekeeping) has been on instructions that communicate better to the CPU's instruction interpreter and scheduler what the programmer's overall intent was, and that are structured in a way that is easier for compilers than for humans.

So which won?


The "CISC just became RISC on the inside" thing is greatly overstated.

CISC always decoded to simple, more or less single-cycle ops internally; that's how microcode works. The RISC shtick was to get rid of that decoding into simple ops in the first place. Originally that didn't make sense, because those ops' fetch bandwidth would be competing with data bandwidth. But notice how RISC popped up at the same time as ubiquitous instruction caches? They solved the same problem in a more general way; the I$ means that your instruction fetch isn't competing with data on hot paths. You can also see this in how all of the early CISC archs would have single-instruction versions of memset/memcpy/etc. The goal here is to get the cycle-by-cycle instructions out of the main bus data path by sticking them in microcode.


> CISC always decoded to simple, more or less single-cycle ops internally; that's how microcode works. The RISC shtick was to get rid of that decoding into simple ops in the first place

Having written microcode myself as a wee lad, I would call that a gross oversimplification. And the micro machines inside a modern CPU are themselves fiendishly complex; my point is that microcode is no longer completely hand-crafted, which is the only level at which the “RISC survived” argument might (IMHO) hold.

For the reasons above I also don’t agree with your final point.

I have to say I am not familiar with the microarchitecture of any of the early large-scale single-chip CISC CPUs (my microcode forays were for much larger machines), so we may be speaking to some degree at cross purposes. But again, I think you mischaracterize the 801 and its descendants.


I've written microcode too, and there are two types of microcode: vertical and horizontal.

Horizontal is your wide microcode, like what I programmed on the KB11A CPU inside a PDP-11/45. It was somewhere around a hundred or so bits wide, and you could pretty clearly see "ok, these five bits just latch into this mux over here, these over here", etc. in the micro-architecture. I've seen between 96-bit and 256-bit wide single instructions here.

Vertical microcode is what you see in the designs that the 801 was trying to get away from, the ones with a full CISC decoder: much smaller, fixed-length instructions that represent higher-level ops. That's mainly what RISC was trying to get rid of.

The non-ascetic CISC machines would normally have at least two microcode ROMs: one in the decoder, and at least one in the backend, maybe more depending on how they separated out their execution units.

So, for instance, the 68K had:

* Decoder microcode of 544 17-bit instructions

* Execution-unit "nanocode" of 366 68-bit instructions

An ARM1 had:

* No decoder microcode (but 32-bit wide, fixed-width, aligned ISA instructions with an I$)

* Execution-unit microcode of 42 36-bit instructions
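For a rough sense of scale, a back-of-the-envelope on the ROM word counts quoted above (my arithmetic, decimal kilobits):

  68K:  544 × 17 ≈ 9.2 Kbit of decode ROM + 366 × 68 ≈ 24.9 Kbit of nanocode ≈ 34 Kbit total
  ARM1:  42 × 36 ≈ 1.5 Kbit of execution-unit microcode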


Opteron never killed SPARC. The changing market dynamics of Cloud did.


I was at Sun when SPARC was invented. And I think Sun killed it.

Every time someone started trying to take advantage of the "open" nature of SPARC, Sun would outcompete them, fairly or not. It was a pretty open secret that Sun had hitched its wagon to the "best" processor for running Unix servers. As I told the leadership after I left, Sun was becoming DEC. They were retreating to the enterprise data center, nobody was buying workstations any more, and Solaris was not getting any more open. I suggested they build servers with the Opteron too :-).


The changing market dynamics of cloud were pretty tied to cheap 64-bit servers. Opteron forced that on everyone. Intel would have gladly dragged its feet on x64 and sold high-margin, "not cloud friendly" Itaniums.


Did he yield?


Effectively, yes. He did let me know that if it didn't work out I would bear the consequences :-). By that time in my career I knew that you're only as good as your previous decision, good or bad.

It became NetApp's best-selling product at the time. I don't deserve any credit for that, of course; the folks who did the work to get it out the door and sell it get the credit. My job was to do what the Army engineers do: land on the beach and clear the obstacles between the beach and the objective so that the main force can do what they came for.

When I interview managers I ask them why they want to be a manager. It is not uncommon for them to say, "Because I want to be able to make the right decisions." And I ask them, "And if they are the wrong decisions, are you prepared to be told to leave?" It is a good litmus test.


For all the times "nobody ever got fired..." is used when discussing technology, this might be the first time I've read about a manager directly warning someone that they might get fired if they don't buy, errr, IBM :)


That's really great. The reason I asked is because I've seen that tactic used just to get people to let off steam and then the manager would push through their own vision regardless of whatever was said. So I'm really happy to see that he not only took input from you but was willing to commit to it as well over his own opinion.

Agreed on the management interviews, any kind of power should always come with accountability in case it is mis-applied.


If you want to learn more about Don and how influential his work has been, I recommend listening to the recent Acquired episode about Don and Sequoia [0].

[0] https://www.acquired.fm/episodes/sequoia-capital-part-1


Great episode. Will listen to it again on a long walk today.

I’ve always sought commentary and quotes from Don Valentine to help me learn more about being an angel investor, the same way I have always sought out Warren Buffett’s commentary and shareholder letters to learn how to be a better long-term investor.


Just listened to it. He’s a giant!


One of his great talks at Stanford: “Target Big Markets” https://youtu.be/nKN-abRJMEw


I loved watching this talk. After watching it, I gained a new level of appreciation for the VC model they pioneered.


If you then also listen to the episode on Acquired, it’s eye-opening how he and Sequoia decide on making investments.


Just listened to it[1] twice; thanks for the recommendation!

[1] - https://castro.fm/episode/bwHpe7


I first recall reading about Don Valentine in the October 1982 issue of National Geographic on Silicon Valley.

An excellent snapshot of Silicon Valley and a glimpse of Don Valentine’s role in it.

http://blog.modernmechanix.com/high-tech-high-risk-and-high-...

Risk/Venture Capital, despite valid criticisms of it, has had an outsized effect on our world in the last 50+ years.

And Don Valentine played an outsized role in it.


Really a great, supportive guy, and very kind in a pitch meeting, even when you are completely losing it. In general I've found the Sequoia folks to be a class act (with a couple of, to me, egregious exceptions).


“his high school friend, Steve Wozniak.”

This can’t be right.


That line also gave me pause, but I think it was meant to say that Woz was Jobs’ high school friend (which is also wrong, but much less wrong).


Rest in peace Don!


[flagged]


Very insensitive....



