It's kind of his "thing" to bounce around to different, interesting projects. He wrote the x86_64 spec while working at AMD (and was responsible for the K8, a.k.a. the Athlon 64), he played a big part in getting Apple's SoCs to best-in-class, and that's not nearly all he's done.
Incredibly accomplished man, I don't think he's going to stay in one place for very long. He'll probably come back to AMD at some point or another.
You really need to look a little more at Jim and his style. He has been around from the time of DEC (Digital Equipment Corporation). And has been a great engineer. But what sets him apart is not his technical competence, but his ability to organise great teams around him and get the work done. And he has always been the person who has been given the reins of companies in crisis. It is a marvel how he manages to get the product back up.
Connections. I bet he has a horde of people willing to work for him, even if the pay were low. They know it will succeed. So he can assemble an Avengers team (pun intended).
Now that POWER is fully open, it will be interesting to see which RISC architecture becomes the x86 challenger (if either). RISC-V is eating up Intel (and ARM) from below, but POWER has the, well, power to compete at top of the line (before Zen 2 came out, anyway). The next gen POWER is supposed to be built on Samsung's 7nm process.
If RISC-V wins, it will be because it has a better developer/enthusiast/hacker story, but if so it will take a longer time to get there.
Either way, I hope for the sake of our infrastructure that an open standard wins out, and that we have enough competing manufacturers that we never have a repeat of Intel's utter dominance.
POWER is basically only top of the line though, other than the relatively rare NXP QorIQ chips that were used in e.g. AmigaOne stuff.
Most of the hype for RISC-V seems to be in tiny embedded stuff (e.g. WD's disk controllers) and academia (naturally). SiFive has finally made an out-of-order core, but I don't really see the market for big unix-capable RISC-V. RISC-V is really a royalty-free "MIPS in a trenchcoat", so expect it to be used where MIPS is used now.
ARM is everywhere, from smartphones (unfortunately Qualcomm dominance, but Apple is kicking ass in performance) to AWS EC2 (in-house silicon!) to massive HPC clusters (Fujitsu A64FX is impressive, they use HBM2 RAM (!) to make a SIMD-capable CPU into something almost GPU-like in a way).
AWS has basically ensured that ARM is the next ISA for servers :P
I'm certainly not downplaying ARM's domination in general, but don't take this bit for granted:
> AWS has basically ensured that ARM is the next ISA for servers :P
These hyper-scalers (incl. Microsoft, Google, Alibaba...) will take a little bit of everything, because they can generally find cookie-cutter workloads impeccably suited to any architecture, and also for industrial diversification, R&D, etc. As with AMD, the presence of some ARM CPUs in the biggest datacenters says little about market forces; what matters is how many actual FLOPS are effectively handled by each vendor. I'm positive Intel still has the lion's share, and the inflection point in favor of AMD (its nearest competitor) would be 2023 at best, more likely 2025-26, assuming Intel eventually catches up in price/perf/W.
A reasonably heterogenous infrastructure is very good when you're an order of magnitude bigger than entire datacenters, or so I hear.
I share your concern that RISC-V is currently largely confined to the MIPS space; and indeed it's a totally different ballgame to break into ARM's space, let alone x86 (but I don't see why RISC-V would seek the latter, especially considering POWER is up there).
The open source community has been questioning RISC-V's openness lately [1]. While I wish they could correct that, maybe that could give POWER an edge.
I don't really know what you mean by “more accomplished people than Keller” but I sure do share your overview.
My point was more that economies of scale happen with killer products (e.g. how CISC won over RISC back in the 90's; it wasn't about tech but about product, and as you say it comes down to sales and marketing). Somehow a guy like Jim got involved in some of the best scalers out there (AMD's x64, Apple's A chips, Tesla's self-driving AI, AMD's Zen, now something at Intel with 3D stacking most likely, and next...?)
To me, David Patterson is the most obvious person who might qualify for that description. He (co-)wrote the book on computer architecture and how it relates to performance (Computer Architecture: A Quantitative Approach) and led development of RISC-I and RISC-II ("RISC-V" is a reference to this lineage; I believe other projects not officially bearing the "RISC" name were counted as III and IV).
I just hope the people involved in RISC-V are collectively able, as an industry, to deliver the kind of fabrication-process backbone, funded by great products 'aggressively' marketed, that has decided the victory of pretty much all standards — CPU instruction sets very much included. A combination of very applied engineering + a shrewd business mindset.
I think RISC-V is going to win for fundamentally the same reason that ARM won: it removes obstacles between product teams and processor cores. ARM didn't win by having the best instruction set design or the most advanced architecture, it won because licensing it was less of a hassle than licensing a different core/ISA. It's like ARM had a "buy now" button where everyone else had "contact sales" buttons. But extending that metaphor, RISC-V is "you don't need to click a button; it's already on your porch".
I assume that RISC-V still has a gap (relative to ARM) between the ISA/HDL levels and getting to an actual tape-out. But I think SiFive et al. can tackle it.
* ARM wasn't directly in the manufacturing business. This meant they weren't a supply chokepoint when demand scaled by a factor of thousands. Just take the IP core and fab it yourself!
* They triangulated the market unexpectedly well.
There was a definite gap in the market between "performance at any cost" big architectures (x86/Itanium/Power/PA-RISC/SPARC) and "draws no current, but gets winded running the clock for a VCR" little ones (6502/Z80/68HC11/Atmel/PIC). For a long time, this was filled on an ad-hoc basis: one-off designs for game consoles, set-top boxes, and ahead-of-their-time smart appliances. Most of these were closed, single-vendor, no-interoperability systems where "ease of integration" and "off the shelf tools" didn't really matter, so we had all sorts of bit players.
It took the arrival of full-power handheld devices with third-party software support (PDAs and then smartphones) for a large demand to appear, and ARM was ready with a product line that fit the space well.
Compare the alternative, a world where we had had x86-based Windows CE devices and eventually iPhones. Intel/AMD/VIA would have been scrambling to put together the diversity of custom designs these devices cried out for. It would have required a massive respin of their architectures (look how hard it was to get Atom into a phone-ready state, and even then it did poorly) and then backlogged their foundries.
This! I also consult and have been to quite a few companies. Often I find that the most assertive person is the one who gets their ideas/perspectives pushed through, not necessarily the most competent. Sometimes the confidence comes from experience, and usually that leads to OK decision making. Other times, not so much.
Just lots of experience. Normally, people don't hang out in semi as much as in software engineering, and that itself is extremely short.
Semiconductor engineering was hurt badly when it tried to borrow work culture from the dotcom world and started treating engineers as disposable.
I myself know of another man of equal magnitude in the analog world. He worked in electronics for 24 years, most of it in China. He worked for one of my employers, where he was deservedly fired for failing a potentially multi-million dollar project. Apple hired him, literally, the next week. I have a suspicion he had his hands on Apple's AirPods.
It was tough to work with him; he has a reputation as "Shenzhen's Rob Widlar." Very few companies can make a working environment for him.
If you ask most people who have familiarity with x86_64, I'm certain they would use a different term than "genius."
Though the first step for you would be to work on projects you're more passionate about, and that are more technically challenging. A billing site (judging by your HN about section) won't challenge you in a way that will make you get better.
But the funny thing is he leaves behind a string of industry defining successes in his wake. You can't argue that he's not tremendously influential and successful in the traditional sense.
"The reason we're stuck with this shitty fifty year old architecture that's steadily gotten worse through every iteration," "Satan," "Evil Computational War Criminal," a few less nice terms.
If you ask a Sun employee: "Enemy #1."
Less sarcastically: "slightly above average."
x86_64 is a nightmare, and we're a decade behind where we could be because it was what the industry settled for.
That seems extremely unlikely. If a new architecture could offer that much advantage (a decade is massive in this space) surely there'd be some pressure to move. One could easily imagine Apple moving to ARM, for example.
Unfortunately, I think a much more likely situation is that x86 (/64) is only slightly hobbling things, and Intel are easily able to push past that with technical craftiness. As in many other cases, implementation trumps theoretical design.
Apple is moving to ARM, the most recent iPad Pro outperforms its laptop line (last I saw, at least), they've spent a fraction of what Intel, AMD, and VIA have invested into x86_64, and basically every major Apple leak mentions that they're investigating moving the Mac line to ARM in a generation or two.
A new architecture can offer that much advantage; x86_64 wasn't even the third most-performant implemented ISA when it was released, and every other ISA makes gains far faster than it. SPARC and POWER are still competitive with it despite having 1/1,000,000th the amount invested in them, and in just a few years and with comparatively nothing invested into it, RISC-V is starting to rival a portion of the chips (though not the upper line of them yet).
It "won" because of backwards compatibility, nothing more.
That's just not the case; Apple were already well behind in performance at the point x86_64 came out, due to them being stuck on POWER. They actively moved to x86, despite big compatibility issues, because of how much better they were.
The evidence is that architectures are just not as important as all that. x86 is clearly pretty bad in many ways, but clever tricks and microcoding have been able to overcome those issues.
That's just not the case; Apple were already well behind in performance at the point x86_64 came out, due to them being stuck on POWER. They actively moved to x86, despite big compatibility issues, because of how much better they were.
This is a complete misinterpretation of the above comment.
> Apple were already well behind in performance at the point x86_64 came out, due to them being stuck on POWER.
The issue was chipsets and peripherals, not POWER performance (which at the time generally beat x86).
The problem was that the entire ecosystem was built around communicating with an x86. So, you couldn't get a Northbridge or Southbridge equivalent that was even remotely close in performance or power consumption to those in x86 space.
Unless you decided that you were going to take on everything in chip design, you couldn't compete. And Apple didn't decide to take on everything until Intel told Apple to pound sand and pissed Jobs off.
Absolutely false. They had to use liquid cooling in their top Mac at the time, and it was still behind an Opteron.
Then the Intel Core architecture came out and pretty much destroyed it performance-wise.
The way I remember is that they couldn't get a PPC of the current (at the time) generation suitable for laptops - too power hungry, too hot. IBM weren't interested in supplying such a part so Apple were really left with no choice. It was a similar story of moving from 68k to PPC - the 68060 wasn't what they needed.
The 601-based PowerPCs were the first to be able to do 3D graphics well on the microprocessor.
The G4 based titanium Powerbooks were sufficiently better that they became iconic at a time that Apple wasn't regarded that well.
Sure, the G5 and up were disastrous, but the writing was on the wall well before that. Chipsets on the G3 and G4-based systems used more power than the processor and that only became untrue because the G5 was quite so poor.
Contemporary Apple marketing that was extremely proficient at putting PowerPC in a favorable light (just sneak enough AltiVec into every comparison to make up for the rest), plus every single one of the (then few) Mac users who parroted it religiously?
Those were different times for Apple. The ship was noticeably turned, but the storm not over yet.
By the time the G5 required massive liquid cooling rigs and was nowhere near ready for laptop use... even die-hard Mac users were not parroting the party line any more. For most, the Intel move was a welcome one, as that was also during the height of the move from desktop to laptop. There was almost no chance for a decent G5-based laptop, at least not one as nice and sleek as the G4s that were already in existence.
Amusingly we can compare it with a clean design that was supposed to replace x86: Itanium. That had the full weight of Intel and partners behind it and was such a flop that it's been completely ignored in this thread.
X86_64 is genius because it is the perfect example of the art of the possible.
It drove SPARC into irrelevance, forced Intel to adopt it instead of Itanium and drove PowerPC out of consumer computers. I'd love to have a nightmare like that on my resume.
Where could I read more about this analysis? Genuinely interested to understand how x86_64 is such an albatross and more details about what could be accomplished in its absence.
There's plenty of criticism of CISC in general, but that gets into flamewar status.
A lot of modern abstraction, vulnerabilities and inefficiency can be summed up with "x86_64 sucks to write for, so let's build a new (or recreate an old) abstract machine!"
In its quest to maintain fifty years of (near) compatibility with an architecture originally used in a calculator, the industry created a monster.
There's a reason Apple (and Sun and MIPS before it) was able to get competitive with Intel's chips despite only getting a silicon team like half a decade ago and using an architecture generally seen as low-performance: they ruthlessly removed cruft, cutting ties with backwards compatibility in the process.
Backwards compatibility is a scourge unto innovation and ease of use, as even Intel saw (IA64, for all of its faults, was better than x86_64 in virtually every way).
I have a lot more criticism of x86_64, and can get a lot more technical in that criticism, but this comment is already getting a bit on the heavy side.
IA64 had a glaring fault that trumps all of its advantages. It was explicitly a VLIW design, so the user (or the compiler) had to manually pack multiple different kinds of ops into a single instruction.
Modern superscalar OoO is pretty much like that, except there is a piece of hardware internally that does the packing, almost like a hardware JIT. This freedom means the magical compiler doesn't have to exist, and it also allows processors to have varying numbers of execution units. See that you could actually use one more integer ALU? Just add it, and even older software automatically gets the benefit.
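A toy sketch of that difference (purely illustrative Python, not real hardware; the tiny dependence graph and the greedy scheduler are made up for the example): a dynamic scheduler picks whatever independent ops are ready each cycle, against however many units the chip happens to have, so adding a unit speeds up an existing binary, whereas a VLIW compiler bakes a fixed-width schedule into the instruction stream and would need a recompile.

    # Toy model only: each op is (name, set of ops it depends on).
    PROGRAM = [
        ("a", set()),
        ("b", set()),
        ("c", {"a"}),
        ("d", {"a", "b"}),
        ("e", {"c", "d"}),
        ("f", set()),
    ]

    def run_dynamic(program, num_alus):
        """Stand-in for the OoO hardware: each cycle, greedily issue any
        ops whose dependences are already satisfied, up to num_alus of them."""
        done, cycles = set(), 0
        remaining = list(program)
        while remaining:
            ready = [op for op in remaining if op[1] <= done][:num_alus]
            for op in ready:
                done.add(op[0])
                remaining.remove(op)
            cycles += 1
        return cycles

    # The same "binary" automatically benefits from extra execution units:
    print(run_dynamic(PROGRAM, num_alus=1))  # 6 cycles
    print(run_dynamic(PROGRAM, num_alus=2))  # 3 cycles
    print(run_dynamic(PROGRAM, num_alus=3))  # still 3: dependences are now the limit

A VLIW binary, by contrast, has the issue width frozen at compile time, which is exactly the "magical compiler" problem described above.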
Ironically the link you posted supports the opposite view: that x86_64 is ugly but that isn't really an issue. Quote:
in summary: x86 is ugly (and below is why I think so) but we don't care because compilers enable us to just forget about what ISA we're using. This is a good thing. This is the way it should be. But it's also an example of what the OP was talking about -- bad hardware design (in this case the x86 ISA; the actual hardware is quite good) not mattering because software is sufficiently good
That is admirable, and pleasantly enviable. Smart people like that being able to work on what they want at any organization they want makes everything better for everyone: the corporations, the users, and the world in general.
While the core is great, I think the biggest contributors to the success are a few decisions outside of the design of the Zen core.
One is an investment back in the Opteron days for an efficient, scalable, low power, low latency, high performance serial interconnect. This is a key enabler for the new chiplet strategy that has paid off so handsomely for AMD. Now a single piece of silicon can scale from low end Ryzen through the highest end servers without having to divide their limited R&D budget across numerous different designs.
Additionally, selling off GlobalFoundries enables them to pick the best fab per generation, something Intel can't do, or at least hasn't done.
So Intel had a setback on the fab side, where the difficulty (chip yield) is made worse by their larger dies, while AMD can switch fabs (they switched from GlobalFoundries to TSMC at 7nm) and make much smaller chips. The top-of-the-line Epyc package has 9 dies inside (I/O + 8 CPU chiplets).
This positions AMD particularly well for the future: they could now rev the I/O chip for DDR5, more memory channels, or any other performance tweak without having to re-engineer the CPU chiplets.
It also doesn't require all the chip-related technologies to move in sync. The I/O chiplet is actually made on an older process than the CPU chiplets. If the PCI-e 4 I/O chip had been late, AMD could have shipped a PCI-e 3 I/O chiplet. If mid-cycle PCI-e 5 becomes a must-have, AMD would have much less engineering to do to add it.
PCIe generations are the ONE thing AMD cannot easily rollback or advance out of cycle, since it’s used as the interconnect between everything on the die.
The Infinity Fabric (current generation that evolved from hypertransport) is a cache coherent serial network that can also do PCI-e (non-cache coherently). This can be swapped on a per serial connection basis as needed.
Seems like they could rev the PCI-e side without changing the IF side. The I/O chip has separate connections for off-chip, so they could use PCI-e 5 there and not change the connections to any of the CPU chiplets.
The PCIe and IF connections are basically two MACs sharing the same PHY, and the PHY is the hard part to speed up. There's not really a plausible way that AMD could end up in a situation where they would have reason to boost the speed of just one of PCIe or off-package IF. And the on-package IF PHY between the I/O die and the CPU chiplets is easier to get working at a given speed than the off-package version (though I don't remember off the top of my head if they are still using two separate PHY designs on the current generation). So by the time they've got an upgraded I/O die with PCIe 5/6/etc., it's pretty much guaranteed that they will be ready to bump up the IF link speed to match.
Jim is a legend. Led the AMD K8, which was the first real "oh shit" moment for Intel, and co-authored the x86-64 ISA. Bootstrapped the silicon teams for both Apple and Tesla.
I'm in Canada, and for the first time in over a decade, I decided I couldn't pay the premium of $3200 (taxes included) for a Macbook Pro with 512GB SSD and 16GB of RAM.
I bought a Lenovo 2 in 1 touch laptop with lots of ports, and an AMD Ryzen 3700U with RX Vega 10 graphics, for $700 (taxes included).
The AMD Ryzen 3700U is just as good as the 8th Gen i5. Plenty for me at home. And with the money I saved, I can upgrade my NAS.
This combination of AMD + Windows + Lenovo has finally pushed me to try the world outside of Apple.
I switched to ThinkPads last year (first an X250, now a T470s) and this was my big concern too. I just sat down one day and spent an hour or so tweaking the various options for the trackpad driver until I got it how I was used to from Apple. I'd say it's 95% as good, and some things like dragging are in fact easier as there are physical mouse buttons (as well as tap to click).
I have no idea why the defaults are so bad, but I've kind of got used to that as a Linux user ¯\_(ツ)_/¯
Any Synaptics multi-touch trackpad feels as good as Apple. Apple hardware is nothing special.
It's all about the software. Scrolling in Firefox on Wayland with the upcoming vsync patch (https://bugzilla.mozilla.org/show_bug.cgi?id=1542808) and OpenGL layer compositing enabled feels perfect. (Fun fact, I actually contributed the inertial scrolling on GTK patch to Firefox :D)
If you run legacy software and your trackpad scrolling is delivered to the application as emulated scroll wheel events, don't be surprised that it doesn't feel right.
Apple's trackpads are good (although overly large these days), but their keyboards are utterly horrific, so it depends what you prioritise. If you mainly consume media and browse the web, then Apple's input is optimised for that case.
If you write or code or otherwise do real work, then something like a thinkpad has input better optimised around that.
Keyboards seem to be a pretty individual thing. I like Apple keyboards (even the Touch Bar ones), and think their standalone desktop keyboards are the best keyboards for programmers available.
I use a $250 mouse + a $250 mechanical keyboard since the laptop is docked... Pretty sure nothing compares to Apple's trackpads. At the same time, sitting hunched over a laptop on a desk is suboptimal too, though many do it. Point is, for significantly less money a non-Apple device is most likely going to get the job done and not be a disaster hardware-wise. Is the trackpad really worth $2,000?
Btw, not sure why you compare the trackpad to a mouse. People who commute or work from cafes need a good trackpad, and Apple's are the best. The trackpad itself costs much less, but I doubt people would use it as a mouse replacement in office situations.
Ok, so after using it for a couple of hours, the trackpad feels great! In fact, side by side, they are identical in size to the Macbook Pro 2013's. They clearly copied the size from Apple, but the feel is fine. The keyboard feels a bit different, but you get used to it.
I also did not change any setting on the trackpad. This is out of the box.
It's shipping as I type this and should have it this week. I'll update this thread since I've been using a Macbook Pro trackpad for 10+ years now and give you a first impression.
AMD lags not only because of hardware. There is very weak driver and library support on the software side. Intel and Nvidia spend huge amounts of money supporting library maintainers, thereby creating a lobby.
For example, look at the list of supported GPUs in the GitHub repositories of popular machine-learning libraries like PyTorch, TensorFlow, etc. AMD and OpenCL are nowhere compared to Nvidia's CUDA.
Also, the performance of AMD GPUs for deep learning is improving (50% in last 12 months through software alone). The Radeon VII ($600) is about the same performance as the Nvidia 2080Ti ($1100) - and the Radeon VII can be used in a data center (Nvidia force you to use the Volta GPUs - $9000 for a V100):
Has this resulted in any increase in uptake of their GPUs? I'm usually the one to praise The Economist for covering technical subjects without embarrassing missteps but this:
> Its gpus—which provide 3d graphics for video games and, increasingly, the computational grunt for trendy machine-learning algorithms—go up against those from Nvidia, whose revenues last year of $11.7bn were nearly twice those of amd.
seems wrong to me. They've been doing perfectly well in HPC tasks in supercomputers. And their CPUs have been doing great. But as far as I'm aware most machine learning work that moves away from NVidia moves to custom silicon rather than AMD GPUs. For the sake of openness I'd very much like AMD to start becoming a more popular option though.
On the other hand, Nvidia's Linux drivers are blobs (that don't even support Wayland) while AMD and Intel actively contribute to Mesa, and treat their users well (or at least much better than nvidia).
There have been numerous situations just this year where I've stumbled on problems from having an AMD card, whereas Nvidia's with their "blobs" have worked just fine. Gaming, video editing, even ML, they all seem to work perfectly on Linux with Nvidia, and AMD always has some problems. So at the very least this is contested. Nvidia seems much more supportive of the open source OS.
My experience in research taught me that Nvidia creates a hostage-like situation. CUDA being the standard, it locked us into the proprietary driver and into overly expensive GPUs (or struggling to source used Titans) for double-precision computation. With AMD's stack not supporting CUDA at the time, their cheaper, more powerful double-precision GPUs felt like forbidden fruit.
To top this off, the proprietary driver gave us permanent issues with OS integration, testing and deployment, and being distributed out of band, it was really difficult to get every developer on the same version. Each version of the GPU compiler had different and conflicting quirks, increasing the chaos of which developer/researcher used which branch of our software.
I would not recommend tying any serious work to NVidia hardware to any team.
You don't have a laptop with an Nvidia Optimus card, do you? It's a shitshow of never-ending problems with Nvidia, while every AMD laptop solution works as intended right after OS installation.
I have had exactly the opposite experience, issues with Nvidia cards / drivers that I couldn't fix. Over the last 5~10 years AMD support on Linux (and other open source OSes) have been much better. To the point that for some of my workloads AMD on Linux is faster than AMD on Windows.
There is one advantage to the Nvidia proprietary driver: you get support for new cards on day one, while with AMD/Intel it takes a few months to get the drivers into good shape.
I'm pretty sure amdgpu has supported AMD's recent cards (from the past few years) before they hit the shelves. Same for intel. It's always fun seeing the commits adding support for chips that you can't buy yet. And I've bought some very bleeding edge hardware from both companies in the past few years, with good results OOTB on Linux.
It's crazy how AMD still hasn't fixed their long-standing driver issues. It's been a weak point for years and still is.
I'm looking at their new cards but holding off, because people are saying the only way to avoid crashes and black screens is a fresh Windows install. Yeah, no thanks.
Tbh, at this point I consider it a bit of a hostile meme left over from the ATI days.
Driver support/reliability isn't something one could quantify in any meaningful way, as both companies have had plenty of botched driver releases [0] over the decades of their existence.
"For now, AMD’s resurgence is good news for consumers, IT departments, cloud-computing firms and anybody who uses software. Like any good monopolist, Intel charges a steep price for its products—unless AMD is doing well. Sure enough, Intel’s newest set of desktop chips, due in November, are some of its thriftiest in years."
I just built a new PC with Ryzen 3700x. I use it mostly for music production. Fantastic performance so far. 10 instances of Serum and my CPU doesn't even cross 15%.
Moreover, I feel like I'm buying for the future with Ryzen. Intel is going to change its architecture after the 10th gen. Buying a 9th gen Intel right now seems like throwing money at a dead end
From the game theorist point of view I think its better if one company is the underdog. Think about it, if one company is the underdog, one company has a lot to gain by competing, while the other has a lot to lose if they don't compete. Therefore we get competition.
Now, the more equal the companies' market shares, the greater the risk and the less the reward for competing... A better strategy would be not to undercut your competitor and instead divide the market share, which leads to stagnation.
Do people here think this sounds reasonable?
Edit: Mathematically the argument would be as follows:
Consider two companies, A and B. A has market share 'a' and B has market share 'b'; n is the total market, so a + b = n.
A's reward for competing will be n - a = b.
A's risk for competing will be a (its existing market share).
A will compete as long as the reward is greater than the risk.
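A minimal sketch of that toy model in Python (assuming the reward/risk comparison above is the whole decision rule, which is of course a huge simplification):

    def should_compete(a, n):
        """Toy model from the argument above: company A with market share a
        (out of a total market n) competes only if the potential reward
        (the rest of the market, n - a) exceeds what it risks (a)."""
        reward = n - a   # the competitor's share, b
        risk = a         # A's own share
        return reward > risk

    print(should_compete(a=10, n=100))  # True: the underdog has a strong incentive
    print(should_compete(a=50, n=100))  # False: with equal shares the incentive vanishes
    print(should_compete(a=90, n=100))  # False: the dominant firm has more to lose

So in this framing, the incentive to compete disappears exactly when the shares equalize.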
Take it for what you will, but an Intel engineer relayed to me (many years ago, in the aftermath of the Microsoft antitrust trial) that they attempt to ensure that AMD maintains a certain level of competitiveness as a hedge against antitrust. Sometimes that involves actually helping AMD via partnerships or technology sharing if they are struggling too much; other times it means giving them a swift kick to the crotch if they are gaining too much ground. It may have been BS, it may not have been, and much has certainly changed over the years... but...
AMD has always been nipping at Intel's heels, for 20+ years now, never really losing or gaining too much to pose a real threat. Yet we've seen how ruthlessly Intel will snuff out potential contenders (such as Transmeta, RIP), it does kind of make you think there's something to it.
This was the reason for me to still buy AMD shares when they were deep underwater. I considered them basically immortal, because if the company ever was in existential danger, Intel would have a huge incentive to stage some kind of indirect rescue operation (not outright buying, that would kill the goose, but something else that surely would prop up AMD share value), because the monetary value of AMD as an antitrust insurance was easily much higher than AMDs market cap.
Sold those shares way too early in hindsight, but still got a good return out of that thinking, and if AMD ever gets into trouble again, I won't hesitate to apply the same logic.
There's a simpler explanation for why Intel used AMD GPUs for a while: Intel's new integrated GPU design couldn't ship until they sorted out their 10nm issues, and their older design wasn't competitive. When Intel went shopping for GPUs on the open market, they could get them cheaper from AMD than Nvidia (though the HBM2 requirement was a clear downside). They actually paired AMD GPUs with both 14nm Kaby Lake and their failed 10nm Cannon Lake processors that had broken integrated GPUs. The short-lived Intel/AMD GPU partnership came to an end because Nvidia's lead over AMD got too big, but it was doomed to be cancelled as soon as Intel got 10nm working.
Nvidia and AMD both make discrete mobile GPUs, and those are the only two options for offering better GPU performance on an Intel laptop when Intel's own GPU is inadequate. Nvidia's GPUs have for years generally had a substantial power-efficiency advantage over AMD's.
If it turned out to be actually true... I wonder sometimes if it makes a better example of antitrust policy actually working to some degree... or failing.
Ease of entry into the market is much more important than the present market structure. An underdog can become the market leader in a year or two [0]. The issue with Intel/AMD is that there are only something like 4 legal entities out of >7 billion legal entities on earth who are licensed to produce x86 chips. 4 is better than 1, but it is still a low number.
It is nearly impossible to maintain an oligopoly in a market that is easy to enter, and questions of underdog/overdog become irrelevant. All companies have to offer a reasonable (value/$) proposition to customers or they go broke.
Sure! For actual x86, they already have. You can make a perfectly good 32-bit chip with SSE2.
The patents on the core of x86_64 will expire in a couple years. Even if you can't have the more recent vector instructions, that's pretty good for a lot of use cases.
each of those 4 entities has whole teams devoted to building up the warchest of patents + colluding on standards to keep the balls in the air indefinitely.
Possibly some kind of tacit collusion is more likely with relatively similarly sized competitors. But to me a giant vs underdog situation is usually worse because in the real world I don’t think the underdog usually comes from behind and wins. In fact I’m amazed by the number of times AMD seems to have accomplished this with a fraction of Intel’s resources.
It feels backwards... the closer the companies are, the more there is to lose from not competing... but you do get commoditization. I wonder what a world with AMD being the leader and Intel being the underdog would look like.
I understand this is an armchair discussion about economics, but this is wildly oversimplified. What does it even mean to "compete" in this case? Obviously, if either Intel or AMD stopped researching entirely, they would quickly fall by the wayside, as long as the other poses a credible threat of being able to innovate.
I would say that your a and b could be like, levels of investment into researching. They'll research at some level, or risk falling too far behind, but can't spend too much as then they'll have to divert funds from other things like marketing or production, or just run out of money. They'll both likely choose to invest at a pace where they think they'll be able to match the other's innovation, but not so much as to overspend.
Yes, it is oversimplified. Suppose that a = b, i.e. both companies have the same market share. Also suppose that when both companies have equal market share, their innovation rates are the same, prices are the same, etc.
The model then has two equilibria. Either both companies compete, in which case their market shares fluctuate around a = b and we get a sort of predator-prey model[1], or both companies do not compete and their market shares stay the same.
What does it mean to compete? It could mean many things, like not lowering prices, delaying innovations until the competitor has released their own, etc.
The issue with this is the scenario where either company deviates from the status quo. For example, company A decides to make a tradeoff: invest more into R&D at the expense of sales and marketing. If Company B remains the same, company A may suffer a temporary dip in sales, but in exchange, their product becomes better over time and they are able to take more than the original 50% of the market due to having a superior product.
What you're talking about really only works if both companies agree to stay stagnant and collude to keep the status quo as-is. To be fair to your point, this has happened a few times historically, but it is usually considered price fixing and is very illegal[0].
I think a clearer example would be telecom companies. Say A and B are telecoms with equal market share. A could "compete" in an attempt to gain market share and roll out gigabit bandwidth, but this will only cause company B to retaliate and install gigabit bandwidth as well. Therefore the market share and revenue will fluctuate back to equal, but both companies will have lost the money involved in installing the higher bandwidth. So if market shares are equal, the best strategy is "tit for tat", i.e. wait until your opponent does something. Which has two equilibria: either constant tit for tat, or waiting for the opponent to make a move.
When one company's market share is smaller than the other's, it is always better to "invest", i.e. compete.
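A rough sketch of that telecom example (COST and SHARE_GAIN are made-up numbers, purely to show the payoff logic): if an equally sized rival can match the upgrade immediately, the first mover just burns money; an underdog facing a slow-moving incumbent gets a window of gains that can outweigh the cost.

    # Toy iterated version of the telecom example above (arbitrary units).
    COST = 10          # cost of rolling out gigabit
    SHARE_GAIN = 30    # customers captured per round while the rival hasn't matched

    def first_mover_net_gain(rounds_until_rival_matches):
        """Net gain for the firm that upgrades first, assuming its temporary
        share gain lasts only until the rival matches the upgrade."""
        return SHARE_GAIN * rounds_until_rival_matches - COST

    # An equal-sized rival playing tit for tat matches right away:
    print(first_mover_net_gain(rounds_until_rival_matches=0))  # -10: pure loss
    # A complacent incumbent gives the underdog several rounds before responding:
    print(first_mover_net_gain(rounds_until_rival_matches=3))  # 80: worth it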
There is always an advantage to being first; if there wasn't, nobody would ever invest in anything new. Even in your example, the first company to bring gigabit internet to an area can secure some contracts that will still be in place when their competitors respond, so they get a head start when that does happen, which can lead to a longer term market share advantage if they can keep the momentum going. Even if that advantage doesn't last forever, it certainly makes them a lot of money in the meantime.
See: Uber vs Lyft, the iPhone (and, historically, many other Apple products), Coca-Cola, Netflix, etc.
I don't think that's the heart of the question. My question is whether it is better for innovation if two companies have equal market share or if one has a smaller market share. I'm trying to argue that it is better if one company has a smaller market share.
Case 1: Netflix. Netflix caused innovation in the movie rental industry. But when Netflix first began it had much smaller market share compared to blockbuster. Would Netflix innovate again? Sure, but I doubt it would do anything revolutionary again. Most likely it would grow stagnant, once the new market stabilizes between hulu, HBO, disney, amazon etc.
Case 2: Apple. Apple was the underdog in the early 2000s, and that caused them to innovate while Microsoft had grown stagnant. Today both Apple and Microsoft sort of have similar market shares and they don't really compete with each other anymore. Microsoft shifted to cloud and dropped Windows Phone. Apple keeps doing what they are doing, with marginal upgrades to iPhones and Macs. I don't really expect them to come up with another "iPhone"-level innovation any time soon.
Case 3: Amazon. No big company is really trying to compete with Amazon these days. I don't see Google or Facebook coming up with their own online stores. It's just not worth it to compete. Meanwhile, there are a lot of smaller online stores.
Case 4: Automobile industry. Sure, there are new car models each year... but nothing revolutionary, just marginal upgrades over last year's model. It was not until an underdog (Tesla) tried to gain market share that we saw any sort of innovation from them.
Most of the innovation today happens at smaller companies, and they eventually either succeed and become the next Apple or Google, fail and go out of business, or get bought by the big companies.
> My question is whether it is better for innovation if two companies have equal market share or if one has a smaller market share.
If that's your question, then the answer is obvious, isn't it? The higher the potential reward, the more motivation to innovate. If you only control 1% of the market, innovating might easily mean a 1000% growth of your company; but if you already own half, the best you can possibly do is double that. The upside just isn't there to justify big, risky plays.
That said, there is always a desire to grow, even if not by as much, so there is still incentive to not become stagnant and to continue making improvements to your products, even if marginal.
Yeah, when Threadripper came out, I believe it offered the same performance as Intel's i9 for about half the price. Why did AMD choose such aggressive pricing? Because they have the desire to grow. If Intel and AMD had the same market share, it would not make sense to put out a product at half the price of your competitor. So if Intel and AMD had the same share, we would probably still be paying $1000+ for Intel's i9.
Assuming they follow on with a credible laptop CPU, AMD is a clear BIG underdog winner, and Intel has some reflecting to do. It's a pretty unique moment in time.
It's been great for consumers, though. This ought to be a textbook illustration of how serious competition can drive a net increase in surplus, especially for consumers. I'd guess anyone who follows hardware, even from a distance, would agree. Finally broke the four-core "ceiling" intel imposed, too. I don't know if AMD will stay in front long-term (intel's got some nice stuff coming out on new nodes), but that doesn't matter. If intel is forced to put some work in and pull ahead again, still better.
Interestingly enough, I'm hoping that intel does the same to nvidia in the GPU space. Nvidia is still the choice if you want the best performance, and cuda is the standard. That might change (and prices might drop) if intel's cards end up being good. Fingers crossed.
Wouldn't we all fare better if those doing the duplicate work of researching for their individual companies were instead able to work together on a single design? If, instead of two corporations wasting resources fighting each other for market dominance, we were all only allowed to work for the government, tasks like these could be performed in a way that nothing was wasted, and on top of that we would all be working for the greater good of the people rather than the profit of a few.
Such a configuration would surely beat the pants off of any capitalist environment, regardless of the regulatory structure imposed. Not to mention that, by having goals that put the people first, we would end up with a society of better-served, happier people, where empathy was the rule rather than the exception.
Intel's current situation with the 10nm node actually illustrates why having separate designs is a net benefit for the industry and the consumer. It was supposed to be delivered earlier, but due to a design/planning failure (https://www.tomshardware.com/news/intel-ceo-cpu-shortage-com...), it was stuck in development and kept Intel from rolling out their new architecture (Ice Lake) until recently.
What prevents a government-led project from hitting the same problem? Without serious competition from AMD, I'm quite sure there would be no price reductions and we'd be paying more for inferior products.
Nowhere near where it was in the 90's. All of the various RISC players regularly leapfrogged each other. There was a pretty mad rush to buy, too, because you often ran software that charged "per CPU". So there was a free software upgrade of sorts.
Oracle tried to beat that system with pricing tied to the clock frequency. That approach was kind of funny...they were lining their pockets with effort that other companies were putting forth.
The person in charge could have the same philosophy as you, and after seeing several reports of non-progress, invite some other people with alternative ideas to work in tandem.
There is nothing wrong with trying different approaches in parallel, but why does it have to go all the way from cradle to market that way?
At a certain point it would be obvious which design was the better one, and then the group who was working on the design that proved inferior could then join the more successful group and double their productivity. If it was possible to cut losses way ahead of the curve, imagine what could have already been achieved.
It’s still not obvious that Intel’s R&D is inferior (late, sure, but it hasn’t killed the company yet), and ultimately the market is the best judge of which is the best design.
It's not really superior or inferior, but different approaches with different levels of risk. Intel invests lots of engineering hours tying their layouts very intimately to the particular process they're using, to get out every scrap of performance. AMD does their best to modularize their design at every level, both to avoid spending engineering hours they don't have and to let them move to whatever process node they need to.
When Intel's fabs were doing well they were dominant. When they ran into problems they were stuck in a way AMD wouldn't have been if TSMC had problems.
> ultimately the market is the best judge of which is the best design.
This is just bias. How can you know what the ultimate best judge of anything is? You would have to know what all the alternatives that could ever be conceived would be.
And I can prove you wrong more directly. If the market is the best judge of the best design, why is so much money spent on advertising? If a better-designed product with less advertising always won against a worse-designed product with better advertising, since "ultimately the market is the best judge of which is the best design", then there would be no point spending money on advertising at all.
Re "ultimately the market" most people would point to 20th C experiments with command economies here. The hope was as you describe, a benevolent leader trying different approaches until one was clearly superior, and avoiding all the cost & waste of competitors doing the same work twice, not to mention the waste of advertising.
And it didn't work very well, even on purely economic grounds. Cars are a prototypical example to think about, and they were spectacularly awful for decades. It turns out the benevolent leaders are good at figuring out what's good for their careers, and less interested in making useful products (meaning safe, reliable, fast, efficient, etc.) Even in the west there was too little competition, but it turned out that duplicating almost everything GM did independently in Japan was a great investment to be making.
Another way to say this: if 2 competitors each develop the thing, then in the worst case they spend twice as much money as in utopia. But we can't get to utopia. And in situations where there is only one option (be it enforced by the state, or just a monopoly), it's easy to go wrong by a much larger factor than 2. If you don't like cars, think about drug companies, or academic publishers, who seem happy to increase prices by 10 or 100 times if they can get away with it.
The problem is incentives. If you are the only chip in town, who cares if you go for a long lunch and aren't that creative in pushing performance limits? Capitalism creates winners and losers, and creates incentives for people to try their best not to be on the losing team.
The problem is not incentives, but culture. From a view purely based on incentives, there is no reason to support your grandmother, for example, rather than kicking her out into the cold. Only our culture stops us from treating our closest family members this way.
If we expanded what we thought of as "us" from just the individual unit, however small that might be (the familial unit for some, just the individual self for others), to include the entire country and the government running it, there is no telling what we could achieve.
> If we expanded what we thought of as “us” from just the individual unit, however small that might be [...], to include the entire country and government running it, there is no telling what we could achieve.
Please consider that people have historically tried to achieve the Utopia you describe, where everyone is expected to put the State before the individual. It degenerates into totalitarianism, since forceful coercion and defining a "them" to the "us" is necessary for that to happen.
I much prefer the arrangement of putting more emphasis on the individual and money being wasted on duplicated r&d than what you seem to be arguing for.
An under-appreciated point. All hierarchies impose some degree of conformity, whether they are corporate or government. The market is in fact a very non-hierarchical structure, and provides the necessary space for creativity.
If you are able to recognize that diversity of opinion is desirable, why can't the person in charge recognize this, and create groups that work on different ideas in tandem?
Once the ones that are successful bear fruit, those that are working on the failures can be moved over to the successful groups. And instead of the current environment, where there is a lot of wasted effort allowing an inferior product to go all the way to market to languish there, the decision can be made much earlier than that, saving all those wasted resources.
That has worked at companies but it tends to not be a stable state for them long-term. Hewlett-Packard (HP) is the best example I can think of that started off as an innovator and then got re-orged into a walking corpse.
I think it is a result of human nature responding to the incentives provided by their company, their education, and culture. Most people, if hired to lead the work on an idea, will become upset or stressed if they learn of another group at the company that is also focused on the same area.
Companies can adapt their culture and organizational model to create an ecosystem in which diverse efforts can develop at their natural pace. However if that was easy to do well then there would be no need for companies to hire external design agencies.
At the scale of chip manufacturers such as Intel and AMD they probably feel that they are considering many diverse opinions, but each company has its view and once they decide on the path forward they will cut any staff that are not contributing directly to that path (or at least AMD will.)
The problem is how to adequately choose the person in charge. Adam Smith wrote a book that explains this pretty well.
Your idea is appealing in basic theory, for sure, but a few thousand years of experiment and more sophisticated theory of actual human nature disproves it.
I also suggest that the people charged with designing this new unified CPU have a progress quota. And if they do not meet this quota, they will be sent to a labour camp. This will surely speed things up!
And just to be sure all is fair, you will be in charge of defining this quota.
In that case we would likely be stuck with Bulldozer-era AMD, or with today's Intel. With no competition allowed, the incumbent stays lazy.
The fault lies with human organizations in general. Everything stagnates, and there will be a boss saying "We have always done it this way". Only by coming from outside can one get around these issues.
In the 50s or so, the US was actually worried that what you say might be the case, and that communist society would trash capitalism because factories would share innovations instead of competing with them.
For some reason the innovations didn’t really come.
Not that unique; the early Athlon64s were miles ahead of Intel, who'd gone a long way down the wrong road at the time. This feels very similar, but doesn't bode that well for AMD if they can only do this once a decade.
Those 2003 (AMD Opteron) - 2007 (Intel Core 2) years were some of the most successful years for AMD, and it continued for a little while after until they couldn't keep up anymore.
If the result of this is that AMD gains power and market share for a few years, and then there is another major leap in CPU quality due to Intel, I'd consider that pretty good.
CPUs are now lasting a lot longer... a top-end CPU is likely to still be fast enough after 6 years, maybe longer. Catching up to Intel and surpassing them may be good enough for a whole decade.
Dunno, my Ryzen 7 1800X was relatively "top end" when I got it on launch day, and I'm already itching for 16 cores & faster RAM. I can't imagine using my T470p for work for five more years, it's already feeling horrendously slow.
If you don't care about electricity/thermals, midtier 95w 4core chips from 2011 are still plenty fast. If you do care about energy, then Intel gets better every year, with 5w tdp-down i7 chips coming out.
> AMD is a much smaller company with fewer resources.
To put that point into context, Intel's operating profit in just its most recent quarter ($6.55b) is greater than AMD's revenue over the last four quarters combined ($6b).
I'm not trying to be harsh; in both cases I think it's impressive what they did. Just saying it sucks a bit that in between Intel inevitably seems to get back in front.
I'm interested to see it but I expect it'll be just a noticeable step up, not a blockbuster. The process and focus on IPC seem like good things for power consumption. But it seems like there's a ton of work Intel's done to optimize sleeps/twiddle clock freq/etc. for the thin-and-light laptop use case, and I'm not sure AMD is close to catching up with that.
They could always surprise me. (TSMC does fab power-sipping mobile chips too!) I just don't have super high expectations.
It's a good thing that the next Surface is going to use AMD's CPUs. It's likely that (unlike other laptop manufacturers) Microsoft will spend a decent amount of time to work through sleep and power issues on the software side at least for Windows, which will then trickle down to other manufacturers.
Problem is that in the enterprise segment there is almost no AMD offering with the big brands so it is going to take a while to displace the Xeons. I looked at Lenovo workstations recently and their offering is 90% Intel.
OEMs don't list all configurations they sell on their sites, but they certainly want your money. We've asked for some ridiculous shit over the years (including sTR4 and SP3), and Dell has never failed to deliver a quote.
Supermicro has been offering Epyc servers since the product launch of the first generation. We ordered a server back in June 2017 and it got delivered in November, because the CPUs were sold out in the beginning.
Now with Rome it's getting even more widespread.
The Epyc CPUs are perfect for a single-CPU all-flash storage server due to the 128 PCIe lanes. Intel can't offer anything comparable at the same or a similar price point, because you need to buy 2 CPUs.
SuperMicro's ILO and rack mounting hardware were really poor a few years ago. Has that changed? Keep in mind the big brands have improved those parts since then, so unless SM made amazing progress, they're not a drop in replacement.
Works for me. I just want to be able to turn on, turn off, reset, get a video console, and reinstall the OS from an ISO.
Sure, the software/tools could be nicer, but I've not been overly impressed with the competition. At least HTML5 has replaced the pain of ancient buggy Java solutions that waste 5 minutes over a slow connection with sandbox permissions, Java splash screens, and some awful client that's mostly VNC, but with a hacked-up auth system.
I keep hearing a rumor about a new HTML5-based RSA/OOB: it's one of those things I keep hoping for when I unbox a new SuperMicro, but I've got 12K+ of these in the field and I'm finally at a point where I can replace them with Dells.
In about 5 years, maybe I'll replace the ones I have at home (admittedly, I really should just do a blade server at home -- it'd probably be less power draw!)
There are two kinds of SM rails that I'm familiar with: the cheaper crappy ones, and the "tool-less" ones. The latter are so nice I can mount rails from a single side of the rack with one hand.
More specific? That you think it's garbage doesn't really tell me what's garbage about it compared to others. I've only used supermicro's a little, and it seemed to do what it offered just fine.
I haven’t used it, but I assume it’s really no better or worse. Virtually every OEM licenses their OOB management controllers from Avocent - HP, Dell, Lenovo, etc. all use more or less the same platform with a custom skin and a couple addon features packed in.
From what I’ve seen, Supermicro is no different - they use Avocent hardware like everyone else. Everything just depends on what generation of controller they ship.
EDIT: as an example, the iDRAC on my Dell R210 II’s, the IMM on my Lenovo TD340, and iLO 3 on my HP ML10 are all the same generation of Avocent hardware with few differences. My Dell RX20’s have newer OOB modules and suck less to work with, but so does comparably new hardware from other OEM’s because they’re all made by the same damned company.
Just when I thought HP couldn’t be more stupid, they surpass all expectations; that’s so crazy.
They just sent my brother a $500 video card about eight months late after already replacing the whole computer that had a faulty video card. One hand doesn’t know what the other does in that corporation.
No idea on the desktop/workstation uptake. But servers seem available from plenty of OEMs. Supermicro, Asus, and Tyan have a wide variety of single and dual socket offerings.
What impresses me most is that the transition from Epyc Naples to Epyc Rome was pretty quick and doubled the performance on a wide variety of heavy floating point codes.
Are Xeons still running hot and sucking up a ton of energy? My old workstation from a few years ago is a beast, but the dual Xeons it has run pretty hot temperature-wise.
Well, 2 years ago or so, before the current Xeon Scalable and Epyc chips, it was relatively common to have server chips in the 65-95 watt range. Often servers with 2 chips (mid-range) would be $2,500 to $4,000.
Unfortunately the realities of Moore's law (or the lack of it, really) mean that perf/watt stopped doubling. So in a single generation a mid-range server went from 65-95 watts to 180 watts or so. At the same time server costs approximately doubled.
Higher end chips are now over 200 watts.
The good news is that while they put out quite a bit of heat, the physical size of the socket, heat spreader, and chip has significantly increased. So you still have to dump the heat, but it's not as hard (read: noisy) as cooling the smaller, higher-heat-density chips was.
AMD HD 4770, HD 6870, HD 7870, R9 290, RX Vega 64: all the graphics cards that I bought recently have been AMD. And though my previous CPUs were Intel, my next CPU will be an AMD.
And I sure hope they keep it up. Intel was getting way too comfortable in its dominant position, and that hurts everyone (including Intel in the longer term). The best outcome for all is to have two companies with approximately equal marketshare competing on merit. There isn't much you can squeeze out of general purpose CPUs anymore, but I'd be quite grateful for the next 2 or 3x. And then they can start competing in acceleration hardware and GPUs.
Can you expand on what you mean by this? I don't understand. Are you saying people pick these stocks because of cultural reasons and they're actually not good investments? Are you saying that male-dominated internet subcultures have a cultural reason to pick these stocks, specifically?
It's just a weird mix of both stocks being techy/geeky and extremely volatile (though generally on an upward trend), Elon Musk adding to Tesla's weirdness, and a strange infatuation with AMD's CEO Lisa Su.
Tesla especially attracts a certain weirdness because there's a large cohort of people who think Elon is full of it and hiding massive fraud and are hellbent on exposing him (using $TSLAQ everywhere).
If you follow earnings season, both stocks did extremely well, especially Tesla. They made and broke a lot of people.
Probably the second one leading to a bias causing the first one. Some tech/geek communities really like those 2 stocks. There are other male communities as well that are into other stocks. Even in tech the culture leads to some stocks becoming overvalued and others undervalued. AMD is overvalued while FB is undervalued because of the cultural feelings toward those companies.
AMD is worth ~40B today. FB is worth ~550B today. AMD supplies a product in growing demand while FB supplies access to demographics in growing demand. But the downside risks are not the same. It only took FB’s viral growth to end MySpace. User loyalty is far more fleeting than the market for CPUs. Within five years or your preferred investment time horizon, compare the valuations of each firm and see how your assessment stood the test of time.
Facebook has a plan A and a plan B for dealing with competition. Plan A is to buy out competitors to acquire users and diversify their brand (Instagram, Whatsapp). Plan B is to use FB's overwhelming resource advantage to copy their product (Snapchat). It's fair to say that FB is well aware of what happened to MySpace.
Any potential "Facebook killer" needs to circumvent both of these tactics. Not impossible but not easy either.
Zuckerberg went to Washington in September & October and took care of that. While he was busy with several other topics, you can be sure he leaned on the right members of Congress about TikTok.
The US has a very strong interest in every possible regard to protect Facebook's dominance and to kill or restrain TikTok by forcing it apart from Bytedance or burying it through the app stores.
If it gets separated from Bytedance it's either toast or it ends up in the belly of a US giant (or maybe even Spotify depending on the valuation).
What does this even mean, haha. AMD is an also-ran competing on price. It has no competitive advantages, no moat, a weak brand. That it has a 180 P/E is an absolute joke. It's so high purely because the culture really dislikes Intel/Nvidia making so much money on their pricey products and wants a viable competitor badly. This leads people to become emotionally invested in the alternative, even though objectively it's neither a fast-growing nor a defensive business.
FB, on the other hand: incredible profit growth year after year. Powerful network effects let them crush new markets all the time. Huge profits from their monopolies let them take risks with ease. The whole comment about MySpace/social media is exactly what I'm talking about. Nerdy males are extremely dismissive of products that are mostly used by women or people from different milieus. People have a hard time pulling for companies that they don't see succeeding; it's a cognitive error leading to imagination inflation. Know your biases.
> It has no competitive advantages, no moat, weak brand.
Competitive advantage: it has more experience in GPUs than Intel, and it's more flexible in terms of fab tech because it doesn't run its own fabs - it can switch easily to a different fab.
Moat: Seriously? High-end chip design is one of the hardest industries to get into, you think some startup can just enter the market?
Weak brand: it's very well known across the whole world and has no negative associations; I don't even know what you mean by this.
Don't forget they have the competitive advantage of chiplets which enables higher yields, and the moat of the x86/AMD64 cross licensing agreement with Intel. Intel and AMD are effectively the only organizations that can legally produce modern x86 CPUs even if others had the technical knowledge to do so.
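To make the chiplet/yield point concrete, here's a rough sketch using a simple Poisson defect model; the defect density and die areas below are illustrative assumptions I'm making up for the example, not AMD's or TSMC's actual figures:

    # Rough illustration of why chiplets help yields (all numbers made up).
    # Simple Poisson yield model: yield = exp(-defect_density * die_area).
    import math

    defect_density = 0.1      # defects per cm^2 (illustrative assumption)
    monolithic_area = 7.0     # cm^2 for one big monolithic die (illustrative)
    chiplet_area = 0.8        # cm^2 per chiplet (illustrative)

    yield_monolithic = math.exp(-defect_density * monolithic_area)
    yield_chiplet = math.exp(-defect_density * chiplet_area)

    print(f"Monolithic die yield: {yield_monolithic:.1%}")   # ~50%
    print(f"Per-chiplet yield:    {yield_chiplet:.1%}")      # ~92%
    # Good chiplets can be binned and combined, so the cost of a good part
    # tracks per-chiplet yield rather than needing one huge flawless die.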
They're easy to predict if you keep an eye on them. For example, their announcements at CES earlier this year caused a 20% jump in about 3 days. Very good if you know that they're going to come out swinging.
Index funds and bonds are there for stability. If I'm going to pick an individual stock it's because I think I understand something other people in the market don't.
ROCm doesn't completely suck with the Radeon VII, which is a Radeon Instinct MI50. Deep learning is not my day job and I'd like to avoid supporting NVidia's insane prices for DL-capable cards, so I've been dealing with the performance hit, only using the R7 for DL tasks and switching it off when the power isn't needed. The XFX Radeon VII is actually on sale at Newegg for $569, so it's a lot of power (and 16 GB of HBM2) for the price.
Agreed. The Radeon VII is currently the best price/compute GPU out there for deep learning. Its performance on ResNet-50 is about the same as the 2080 Ti's.
That's only theoretical. Try to use ROCm on latest frameworks or on external models that write custom CUDA operations/losses. Basic stuff might work in a more complicated way than on NVidia, advanced stuff is guaranteed to either not work or work in a couple of months when it lands into ROCm.
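For what it's worth, "basic stuff might work" looks something like this - a minimal sanity-check sketch, assuming a ROCm build of PyTorch is installed (ROCm builds still expose the GPU through the torch.cuda namespace). It only confirms the device is visible and a kernel launches, and says nothing about framework coverage:

    # Minimal ROCm/PyTorch sanity check (assumes a ROCm build of PyTorch).
    # On ROCm builds the GPU is still addressed through the torch.cuda API.
    import torch

    print("PyTorch:", torch.__version__)
    print("GPU visible:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        a = torch.randn(4096, 4096, device="cuda")
        b = torch.randn(4096, 4096, device="cuda")
        c = a @ b                      # tiny matmul to confirm kernels launch
        torch.cuda.synchronize()
        print("Matmul OK, mean:", c.mean().item())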
The Radeon VII is a beast for FP64 computation. If you do simulation or heavy computation that requires supercomputer levels of precision, then grab one while you can - it's the best price/performance of any GPU on the market.
However folks, please don't follow the advice about using it for deep learning if you want to actually run a deep learning business in any way.
Nope - just grab yourself an RTX 8000 and be able to train the latest SOTA models (albeit slowly). The Titan RTX is already insufficient, and nobody else is in the game for actually owning DL hardware :(
Can I ask -- What was your line of thinking that led you to find this out?
I'm the kind of guy who could probably figure out how to do this if it was, like, given to me as a task. But never in a million years would I just stumble across this.
I don't like it when different websites look very different. So in my primary browser (Firefox), I only allow sites to use my preferred sans-serif, serif, and monospace fonts. (In the options page's font dialog, I unchecked "Allow pages to choose their own fonts, instead of your selections above".)
With web fonts overridden, I noticed that all occurrences of "AMD" rendered in lowercase, which told me the text is actually set in lowercase in the markup. Opening the page in Chrome, I saw a giant banner on the bottom covering a third of the page, and another on top that closed when I dismissed the bottom one. In Chrome, "AMD" was rendered in small caps using the font MiloSCTE, while the rest of the body text used MiloTE.
I assume the previous poster blocks all web fonts as a matter of course (to save bandwidth? speed web page loading? reduce vectors for tracking?)
Anyhow, people should ideally be using small caps a whole lot more. Using all caps for abbreviations is much uglier and less legible, especially in texts where many abbreviations appear.
Oftentimes people will upvote the topic/headline as something interesting or noteworthy, or because the discussion in the comments is interesting. I'm sure a double-digit percentage of people are not even clicking through to TFA, and another double-digit percentage click, hit the paywall, bounce back to the comments for the discussion, and may still upvote.
I don't think so, because TSMC now has the highest transistor density. It's more that Intel fumbled 10nm so badly that others have passed them. Intel's 7nm had better be good and timely or they're in some serious trouble. TSMC's 5nm starts early next year, with 1.84x density scaling.
But nm figures are marketing terms now; TSMC's 7nm is said to be roughly equal to Intel's 10nm.
Nodes are still shrinking, but not at the rate implied by their nm names. In addition, thermal constraints mean they can't actually be used at their theoretical capacity.
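As a back-of-the-envelope check on how little the names tell you, here's a tiny sketch using only the figures mentioned in this thread (7nm to 5nm, with the quoted ~1.84x density gain); a literal linear shrink of the name would imply a bigger jump, and the naming doesn't line up across vendors at all:

    # What the node names imply vs. the density figure quoted above.
    # Numbers are only the ones mentioned in this thread, not official specs.
    old_nm, new_nm = 7, 5
    implied_area_scaling = (old_nm / new_nm) ** 2   # if "nm" were a literal linear shrink
    quoted_density_scaling = 1.84                   # TSMC 5nm figure quoted above

    print(f"Implied by the names: {implied_area_scaling:.2f}x")    # 1.96x
    print(f"Quoted density gain:  {quoted_density_scaling:.2f}x")  # 1.84x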
Intel's 10nm is finally shipping now, according to Intel, but it still seems small-scale. TSMC's 7nm has been shipping for over a year, in extremely high-volume devices (iPhones).
Congrats to AMD, but I'm still very pessimistic about their long-term prospects. It seems like Intel's advantages come from a system that produces improvements over years; you can see this just in their R&D spending: Intel spends nearly 2x AMD's revenue on R&D alone. Whereas this development from AMD was thanks to Jim Keller (who now works at Intel)... it was a one-off event, and once they've extended it as far as it'll go, then what? Unless they develop their ability to innovate (they've had issues keeping top talent at the firm), this will probably be another one of AMD's boom-and-bust cycles.
Intel decided to invest more heavily in share buybacks than R&D as of their recent earnings call.
But that's only half of the story. They need that R&D budget because they have the (massive and growing) expense of building and upgrading their own fabs, which have suffered a series of costly mistakes in the last decade. I wonder what percentage of Intel's R&D budget is actually comparable to AMD's if you exclude the amount poured into fabs; I'm betting those figures are much closer, despite the enormous difference in market cap. TSMC, who along with GloFo fab the AMD chips, is basically all-in on R&D investment, taking on debt to fund construction of the most advanced fabs to date. And their prior investments in 7nm have scaled rapidly and panned out well - I think it was the fastest ramp of a node shrink they've done.
Oh, and Keller is definitely smart, but you imply that he's got a monopoly on talent in the semiconductor industry. There's no way that's the case lmao
Intel spreads their R&D spending across many more product categories than AMD does. In addition to CPUs, they've got their new GPU, their SSDs, their entirely new form of NVRAM, wireless networking, and many other projects. And they run their own fabs, which is a huge investment that AMD doesn't have to make.
AMD just makes their CPUs and GPUs. Also, Intel's CPU designs tend to be much more custom than AMD's, trading more engineering effort for a bit more performance.
You've got to give credit to the team as well - it wasn't just Jim Keller. BTW, this kind of architecture can give AMD a few years to catch up. If they can keep the momentum, they have a very good chance of being relevant again.