For machine learning, the Tensor Cores in the upcoming Volta GPUs seem like a much better idea, delivering 120 TFLOPS. Although the price will surely be a multiple of 400 USD.
> For machine learning, the Tensor Cores in the upcoming Volta GPUs seem like a much better idea, delivering 120 TFLOPS.
Yes, but the real-world benchmark numbers Nvidia published show a much smaller speedup than one would expect if one looked only at the 120 TFLOPS number.
This is because (a) "tensor core" is just a fancy name for "fast 4x4 matmul instruction" -- it's not applicable to everything you might do on the GPU, and (b) memory bandwidth did not increase commensurately with compute speed.
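To put rough numbers on (b), here's a quick roofline-style sketch in Python. The ~900 GB/s HBM2 bandwidth and the matmul traffic model are ballpark assumptions for illustration, not spec-sheet figures:

    # A kernel only benefits from the full tensor-core rate if its
    # arithmetic intensity exceeds peak_flops / memory_bandwidth.
    peak_tensor_flops = 120e12  # claimed Volta tensor-core peak, FLOP/s
    mem_bandwidth = 900e9       # assumed HBM2 bandwidth, bytes/s

    break_even = peak_tensor_flops / mem_bandwidth
    print(f"break-even intensity: {break_even:.0f} FLOP/byte")

    # A square FP16 matmul C = A @ B does 2*N^3 FLOPs while moving
    # roughly 3*N^2 elements at 2 bytes each; elementwise ops sit near
    # 1 FLOP/byte and stay memory-bound no matter how fast the matmuls are.
    for n in (256, 1024, 4096):
        intensity = 2 * n**3 / (3 * n**2 * 2)
        bound = "compute-bound" if intensity > break_even else "memory-bound"
        print(f"N={n}: {intensity:.0f} FLOP/byte, {bound}")

Anything that doesn't look like a big matmul ends up on the memory-bound side of that line, which is where the headline TFLOPS number stops mattering.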
"graphics memory" is probably not the right term here. While the Pro SSG does have 2TB of storage on board, that comes in the form of a NAND flash, PCIe SSD (from Samsung IIRC). While fast, this SSD provides single digit GB/s bandwidth at most, compared to the hundreds of GB/s of bandwidth that the DRAM-based GDDR5 or HBM provide. When we say "graphics memory," we're almost always referring to the latter, and never see more than maybe 32 GB of DRAM based memory on board.
Yeah, that seems like false advertising. I believe that much true HBM/GDDR5 would have to cost tens of thousands of dollars. So I was skeptical. That said: if I could get a card with that much real GDDR5 on it, I'd consider paying whatever it took.
GPUs are generally not latency optimized anyway - if an app is sensitive to tens of microseconds then it belongs on the CPU, broadly speaking.
AMD's advantage here is getting drop-in throughput by attaching the SSD directly to the GPU. Thus, the onboard SSD doesn't waste host PCIe lanes (4 lanes, typically), which are getting hard to come by in regular desktop computing systems.
Who is this for then? I should just go Nvidia for my gaming rig?
The RX 480 last year had an amazing price/performance ratio. It was a very exciting product for the mid-range gamer. It seems with Vega they are offering slightly worse than Nvidia's high end for slightly less money.
Better performance at the same price would have been compelling. Slightly worse at a bargain price would have been compelling.
Plus, you won't be able to buy these anyway because of mining. This is just sad. Maybe they should just remarket themselves as a mining company.
Slightly worse for slightly cheaper could still be attractive. Factor in prices for FreeSync displays vs. G-Sync displays, and the price difference to Nvidia gets a lot bigger, in favor of AMD. But it depends on whether the cards will be available, and what performance will be reached.
Seeing as the Vega 64 "starts at" $500 and I was able to pick up a 1080 Ti for about $700, I don't see who the target audience is. If you're in the part of the market that is willing to spend $500+ just for a new graphics card, then you likely aren't going to get the subpar card just to save a few dollars.
If you already have a G-Sync display, you're kind of locked in. I had super high hopes for Vega (if only to apply downwards pressure to NVIDIA's pricing), but ended up upgrading from a 980 to a 1080 Ti a couple of months ago as the launch was dragging out.
With the numbers I'm seeing, I'm glad I chose the TI, and didn't wait. I don't see NVIDIA reducing their pricing over this.
This is the biggest disappointment for me: blower Nvidia cards are basically unobtainium because of miners, and I was hoping we'd get a second source for the Ethereum bubble boys.
There are very few titles that will run 4K at 60 fps. I had dual RX 480s (8GB variant) in CrossFire and could not get The Witcher 3 to stay over 30 fps. Even Overwatch would not stay at 60 fps.
Because you didn't tweak the settings. My RX 480 gets around 90 fps in Overwatch at 4K. All you need to do is to disable dynamic reflections and drop effects quality to low. (Keep models/textures at maximum.)
Reviewers: make sure to test mining performance of these cards. Not that we care about that, but to give us an idea of whether it'll be possible to actually buy these things.
AMD should just run their own mining rig at this point. Create a large multisocket board and plug all those GPUs in for a week of 'burn in' mining coins before shipping them out.
But will it work, or will miners gladly pay the extra $100? AMD would love that -- miners won't redeem the games or the discounts, so that extra $100 is pure profit.
Is that how those bundles work? AMD only pays when the game gets activated/redeemed? I always thought they just buy a bunch of game licenses and pay upfront.
Miners might want to sell the games/discounts to recoup their losses. After all, if you get a $50 discount on a $200 monitor then you can just re-sell it new for $200 and get your money back.
Which is US only. The only reason to buy was the bundle, which is a good deal. Same performance as the 1080 at the same price, with double the TDP, and it will have limited stock. There is no reason to buy Vega as a gamer.
Vega is a failure, aka Bulldozer 2.0.
Same performance as the 1080, a year later, with double the TDP.
Only 30% higher than the Fury, which was their high-end card from 2 years ago.
This whole product launch was terrible, from AMD laughing at Nvidia with "Poor Volta", to the "blind tests", to not even live-streaming the SIGGRAPH event.
The GTX 1080 is $550; Vega 56 is $400 and Vega 64 is $500. With decent cooling, the Vega GPUs can be overclocked to give a significant boost. Also, FreeSync monitors are cheaper than G-Sync ones. Vega 56 + FreeSync will save you around $350. That is ~25% savings if you are going for a $1.2-1.5k rig.
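Rough sanity check on that number (the G-Sync premium is an assumed ballpark, since it varies by monitor):

    gpu_saving = 550 - 400    # GTX 1080 vs Vega 56
    monitor_saving = 200      # assumed G-Sync premium over FreeSync
    total = gpu_saving + monitor_saving
    print(total, f"= {total / 1400:.0%} of a ~$1,400 build")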
> With decent cooling, the Vega GPUs can be overclocked to give a significant boost.
I don't think so. The article touched on the fact that the watercooled one is already seeing a disproportionate ramp-up of power in order to raise clocks. That says to me that it's beginning to eat into the headroom of the card and will soon run up against its voltage wall. That suggests they're already pushing the card pretty hard to get the stock performance levels, which doesn't leave a lot on the table for overclockers.
The GTX 1080 is actually $500 MSRP. The Vega overclocking is surely inferior to the GTX OC. The OP is unfortunately right, I think; trying to enter the Radeon ecosystem at this moment is foolhardy.
Vega is already drawing near 300W, and is so high up on the voltage/frequency curve that even a measly 5-10% gain in core clock can easily cost over 100W more.
Here, AMD is once again (see last year's Polaris) a victim of the inferior GlobalFoundries 14nm LPP process. TSMC 16nm would have been much better in perf/W, but sadly a very restrictive wafer supply agreement locks AMD to GloFo for the time being.
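For anyone wondering why a small clock bump costs so much power: dynamic power scales roughly as P ∝ f·V², and at the top of the frequency/voltage curve each extra MHz needs a disproportionate voltage step. A toy calculation (the +10% voltage figure is an assumption for illustration):

    base_power = 300.0  # W, roughly where Vega already sits
    f_gain = 1.10       # +10% core clock
    v_gain = 1.10       # assumed voltage increase needed to hold it stable

    new_power = base_power * f_gain * v_gain**2
    print(f"{new_power:.0f} W (+{new_power - base_power:.0f} W)")

That's ~400 W, i.e. about 100 W extra for a 10% clock gain.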
Too late; Raven Ridge, with integrated Vega, taped out a long time ago.
One desktop gaming card line-up operating in the wrong region of the efficiency curve doesn't instantly turn an entire architecture into house fire material.
Whatever initial price difference there is will be lost in power consumption over the course of the first year of heavy use or two years of moderate use.
And after the price parity, you're just stuck with a worse card.
Vega 56 is faster than the 1080: 15% faster in SP GFLOPS, 130%(!) faster in DP GFLOPS, and 15000%(!!) faster in half-precision GFLOPS (Nvidia artificially cripples theirs). All comparisons made at base clocks.
"double the TDP"
Merely 17% higher: 210W (Vega 56) vs 180W (1080).
"Only 30% higher than the Fury"
And Nvidia's top 16nm GPU (Titan Xp) is only 39% faster than their top 28nm (TITAN Z) released... 3 years ago. The 28nm→14/16nm transition was hard for everyone.
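For anyone checking the parent's percentages: peak GFLOPS is just 2 × shaders × clock, scaled by each chip's per-precision rate. This sketch uses the published boost clocks (which is what the headline TFLOPS figures usually quote; treat the exact clocks and the 1/Nth rates as assumptions):

    cards = {
        # name: (shaders, boost clock GHz, FP16 rate, FP64 rate)
        "Vega 56":  (3584, 1.471, 2.0, 1 / 16),     # 2x packed FP16, 1/16 FP64
        "GTX 1080": (2560, 1.733, 1 / 64, 1 / 32),  # FP16 crippled to 1/64

    }

    flops = {}
    for name, (shaders, clock, fp16, fp64) in cards.items():
        fp32 = 2 * shaders * clock  # GFLOPS
        flops[name] = (fp32, fp32 * fp16, fp32 * fp64)

    for i, prec in enumerate(("FP32", "FP16", "FP64")):
        ratio = flops["Vega 56"][i] / flops["GTX 1080"][i]
        print(f"{prec}: Vega 56 is {100 * (ratio - 1):.0f}% faster")

That prints roughly +19% FP32, +15100% FP16 and +138% FP64, in line with the parent's numbers give or take clock assumptions.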
The blind tests show something useful - consistency of throughput, which people notice, regardless of how benchmarks work.
Benchmarking software should do more to provide a clean and meaningful representation of the distribution of performance - people care more about a card's worst case than its best or average.
Another significant consideration - as alluded to in the TensorFlow comment - is the compute performance.
Although compute performance is not yet that important from a gaming perspective, I would be rather surprised not to see some games start to utilise it for engine functionality - especially after Vulkan and OpenCL merge.
Wait to see the performance of the card as a whole, and consider the implications for the next generation before you declare the card a non-starter and evolutionary dead-end.
1) Blind Test - That's true, but people who buy still check benchmarks. Don't forget AMD did exactly that with Bulldozer, and we know how that ended up....
If AMD thought that they had a winning horse they would not hide the performance numbers, they would PR that like crazy, like they are doing with Ryzen and ThreadRipper.
2) I'm all for AMD kicking Nvidia's ass in compute and ending CUDA's reign, but it won't sell gaming cards. Vega FE (a $1000 card) has been out for a month and I haven't seen anyone benchmark it and show that it's better. Even if games will use it, there is a heck of a difference between building the model and running inference.
It won't sell gaming cards yet. My argument was that comparing underutilised hardware to that with fundamental unresolvable physical bottlenecks seemed to be a little premature.
The next generation of games could potentially utilise compute functionality, especially if Microsoft and Sony get off their arses and push the capability on consoles.
(That said, porting such software to a PC environment would not be without significant difficulty, given the vastly different memory configuration.)
I'll be hugely disappointed in the industry if we don't see some decent Vulkan-based game engines in the next few years.
> Benchmarking software should do more to provide a clean and meaningful representation of the distribution of performance - people care more about a card's worst case than its best or average.
You should check Tech Report's Frame Time Analysis:
The single frame that takes the longest time to render won't normally be an issue, but a cluster of frames each taking significantly longer than usual will result in a noticeable stutter. The stats should try to take into account not only the low frame rate but whether that rate has been held long enough to perturb a human observer.
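A minimal sketch of the kind of stat I mean, assuming raw per-frame times in milliseconds: a lone spike barely moves the 99th percentile, so it helps to also count runs of consecutive over-budget frames, which are what actually read as stutter.

    def frame_stats(frame_times_ms, budget_ms=16.7, cluster_len=3):
        ordered = sorted(frame_times_ms)
        p99 = ordered[int(0.99 * (len(ordered) - 1))]  # 99th-pct frame time
        clusters, run = 0, 0
        for t in frame_times_ms:  # count runs of >= cluster_len slow frames
            run = run + 1 if t > budget_ms else 0
            if run == cluster_len:
                clusters += 1
        return p99, clusters

    times = [15.0] * 500 + [40.0] + [15.0] * 200 + [25.0, 26.0, 27.0] + [15.0] * 300
    print(frame_stats(times))  # -> (15.0, 1)

Note that even the p99 looks clean here; only the cluster count catches the three-frame stutter.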
Only AMD can really say whether Vega turns out to be a failure or not. We don't know what their goals or constraints were. In the end, if it has a performance/price advantage over Nvidia's (year-old) offerings, then it will be at least a relative "success" from a consumer pov.
It seems to be bringing some technological advancement too - using GPU memory as a cache seems like the way forward. A naive calculation gives about 8 MB to store a single uncompressed 1920x1080 texture (a couple of MB with block compression). Current game usage is approx 1000 times that. It takes more than one texture to build a scene, but there seem to be gains if more textures are loaded on-demand (or slightly earlier).
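The naive arithmetic, for the curious (assumes uncompressed RGBA8 vs a 4:1 block-compressed format; real engines mix formats, and mipmaps add ~33% on top):

    w, h = 1920, 1080
    uncompressed_mb = w * h * 4 / 2**20  # 4 bytes per pixel
    compressed_mb = uncompressed_mb / 4  # 4:1 block compression

    hbm_mb = 8 * 1024  # Vega's 8 GB of on-card HBM2
    print(f"uncompressed: {uncompressed_mb:.1f} MB, ~{hbm_mb / uncompressed_mb:.0f} fit on card")
    print(f"compressed:   {compressed_mb:.1f} MB, ~{hbm_mb / compressed_mb:.0f} fit on card")

So on the order of a thousand full-screen textures fit in the 8 GB of HBM2, which is why treating it as a cache in front of a bigger pool looks attractive.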
With the failure of RX Vega to meet performance expectations, I fear Nvidia has free rein now to continue their dominance in the areas of deep learning / machine learning.
I was really quite optimistic, especially with their recent push with the ROCm platform that AMD has been building up. But if the hardware can't back it up, it means little in the end. It seems like we'll have to wait until the next generation to see if any viable contenders will show up. Disappointing to say the least.
Even in terms of raw compute, the current Pascal iterations like the 1080 Ti, Titan Xp, etc. perform better than the RX Vega.
Add to that a significantly lower TDP for comparable parts, and a developer environment that is currently unparalleled (much to the frustration of myself and many within the machine learning community, as Nvidia has chosen to keep their software closed source), and the RX Vega release is, to be very blunt, a definitively disappointing result, regardless of the distinction between gaming and computation.
I think there is a silver lining somewhere, however, in anticipation for the next generation releases of AMD in a year or two. AMD's investment into their ROCm platform will hopefully continue and iteratively, the open source option will slowly continue to improve. By the time AMD is ready to release their Navi architecture in 2018/2019, perhaps there will be a chance to bring things back towards an equilibrium.
Another contender I see in the nearer future that might be able to topple Nvidia's current monopoly on deep learning is Google's investment in their TPU architecture. In addition to fully owning the development of TensorFlow, with the resources Google has at its disposal, I could easily see the company building out a full end-to-end alternative that takes away the current stranglehold that CUDA has.
Wherever the competition comes from, I hope it comes sooner rather than later. The closed-source, single-option monopoly that Nvidia has managed to position itself into only serves to hurt the consumer.
I can perfectly understand if you disagree with me and am happy to discuss. But I don't feel like anything in my post was misleading or incorrect, so I'm confused as to why I'm being downvoted?
The GTX 1080's current MSRP is also $500, though, and it's been on the market for over a year already.
Also, those numbers only represent the "minimum FPS range" for select titles. While minimum FPS is important, the picture is incomplete without other figures such as average FPS.
It's marketing's job to shine the best light possible on their product, of course, but it says something when Vega still doesn't look terribly impressive despite that effort.
And 78 > 76, in favor of Nvidia. But they don't list average or 99%ile performance, and manufacturer-provided benchmarks are notoriously biased, so we'll have to wait and see the independent reviews before anyone can claim performance superiority.
If a $500 card is roughly the same as a 1080 (currently $550), I think that's fair.
The $700 liquid-cooled Vega should overclock better, but I have my doubts about its efficiency. Overclocking isn't great for heat, noise, or reliability.
I really hope that AMD gets better deep learning support, so it will push NVIDIA's pricing structure down. Also, AMD's processor lineup looks very appealing compared to Intel's (which has had more pressure on their solutions also). (Moved from dupe thread.)
Backends for the deep learning frameworks in use are CPU or CUDA. OpenCL support is essentially non-existent. There's no point benchmarking something that isn't going to be used.
Just curious, does anyone here own $AMD shares? I was surprised to learn that AMD is the second most owned company on the Robinhood trading platform, behind Apple. I bought in at $10.47 after the last earnings absolutely destroyed it. Just sold right before current earnings at $14.12 and am looking to maybe buy back in lower than I sold for. Closed today at $13.61.
That's interesting, didn't know the stock was so popular on Robinhood. I remember around 2 years ago their stock price was in the $2 range and r/wallstreetbets were all talking about going "YOLO" with AMD. Pretty nice comeback for their stock. Lisa Su seems like a great CEO.
http://www.tomshardware.com/news/amd-radeon-rx-vega-64-specs...
...unlike nVidia:
http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1...
While that's something that developers have to explicitly support, it will be interesting to see what happens when they do.