AMD to Nvidia: prove it, don’t just say it (amd.com)
126 points by Garbage on March 27, 2011 | 37 comments



There's a review of both cards in the latest c't Magazine [1] (German). The benchmarks suggest that the AMD card wins in most disciplines, except for tessellation benchmarks and a couple of games at lower resolutions. AMD's OpenGL drivers are also (and have always been) much worse than Nvidia's in various respects, so the Nvidia card wins more of the OpenGL-based benchmarks.

Both cards seem completely removed from reality though, as they apparently sound like jet engines under load, consume almost 400 watts of power, and the GTX 590 actually throttles itself under certain loads (such as FurMark) to avoid overheating/drawing too much current. Oh, and they cost over €600. I realise "maker of the fastest graphics card" is a marketing thing, but it's pretty meaningless in practice.

FWIW, if you want to spend a lot of money on graphics cards, you're probably better off buying a fairly high-end model (but not top-of-the-range) every hardware iteration, usually 6-12 months, and selling the previous one instead of dropping a crapload of money in one go.

[1] GTX 590 review: http://www.heise.de/ct/artikel/Gegenangriff-1213480.html

HD 6990 review: http://www.heise.de/ct/inhalt/2011/08/74/


BTW, you can recover the cost of an AMD video card by mining bitcoins :)

If mining in a pool, a 5970 gets around 7 BTC/day at the current difficulty. (Unfortunately, Nvidia is much slower at integer calculations.)
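
(For anyone wondering where a figure like that comes from: a rough sketch using the standard expected-reward formula. The hashrate and difficulty below are assumptions for illustration, not measurements.)

    # Expected reward/day = hashrate * seconds_per_day * block_reward / (difficulty * 2**32)
    # Hashrate and difficulty here are illustrative assumptions, not measured values.
    def btc_per_day(hashrate_hps, difficulty, block_reward=50.0):
        return hashrate_hps * 86400 * block_reward / (difficulty * 2**32)

    # e.g. a 5970 at an assumed ~600 MH/s against an assumed difficulty of ~70,000
    print(round(btc_per_day(600e6, 70000), 1))  # roughly 8-9 BTC/day before pool fees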

Our mining pool: http://deepbit.net


How do bitcoins compare to the cost of electricity? I've got a (fairly modest) Radeon HD 5770, which consumes around 100W under load. At €0.20/kWh, that's €0.48/day (leaving aside the power needed to run the rest of the system). For the sake of argument, let's say I can get 2 BTC/day with this hardware. Is that worth more than the 48 cents it costs me to do the calculations? (Sorry if the answer to this is obvious; I've only heard of bitcoins, never looked into them.)
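
(To make my own question concrete, a quick break-even sketch. The card's draw, the 2 BTC/day and the exchange rate are all assumed numbers, not measurements.)

    # Break-even check: does a day of mining earn more than the electricity it burns?
    # All inputs are assumptions for illustration (card draw, BTC/day, exchange rate).
    watts = 100          # assumed extra draw under load, card only
    eur_per_kwh = 0.20
    btc_per_day = 2.0    # assumed yield for this card
    eur_per_btc = 0.60   # assumed: ~0.85 USD/BTC at ~1.40 USD/EUR

    power_cost = watts / 1000 * 24 * eur_per_kwh   # 0.48 EUR/day
    revenue = btc_per_day * eur_per_btc            # 1.20 EUR/day
    print(revenue - power_cost)                    # ~0.72 EUR/day margin, ignoring the rest of the system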

This is aside from the moral concern about the environmental impact of wasting energy on pointless hash calculations. I probably waste more energy in other, albeit less obvious, ways.


Current exchange rates put bitcoin at about 1 BTC = 0.80 USD.

https://www.bitcoinmarket.com/

I would be worried about how thin the market is, though, as each significant price increase seems to track a specific piece in the online tech press. In that way it has a lot in common, technically, with a pump-and-dump scheme. If you're running a small enough farm, I suppose I wouldn't worry too much about it, provided you're philosophically fine with the increased energy use.


Currently you can sell 1 BTC for around $0.85-$0.90: https://mtgox.com/trade/history

That is not a waste of energy; it's required to protect transactions from double spending.

If it's a waste, then banks' computer farms are a waste too.


> If it's a waste, then banks' computer farms are a waste too.

Sorry, I don't understand this part. I'm pretty sure real currency isn't backed by brute-forcing hash functions. (I assume banks use contracts along with some form of public-key cryptography to sign transactions.)


Banks run a lot of servers for accounting, online banking, SWIFT and other internal needs.

The Bitcoin network does the same with cryptography and proof-of-work hashing. The energy spent is probably comparable.
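
(If you're curious what "proof of work" actually means, here's a toy sketch of the idea. It's not Bitcoin's real block format, which hashes an 80-byte block header against a difficulty target, just the concept.)

    import hashlib

    # Toy proof-of-work: find a nonce so that the double-SHA256 of (data + nonce)
    # starts with a given number of zero bits. Real difficulty is astronomically higher.
    def mine(data, zero_bits):
        target = 1 << (256 - zero_bits)
        nonce = 0
        while True:
            h = hashlib.sha256(hashlib.sha256(data + str(nonce).encode()).digest()).digest()
            if int.from_bytes(h, "big") < target:
                return nonce, h.hex()
            nonce += 1

    print(mine(b"some transactions", 16))  # easy difficulty, finishes in a moment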


I had never heard of bitcoin before, thanks for introducing me to something new!


> the benchmarks suggest that the AMD card wins in most disciplines

Anandtech's 590 review [0] does not really agree. It sees the cards as a wash overall (though the choice of games benchmarked will influence the winner a lot), compute is Nvidia's domain, and the 590 is significantly quieter than the jet turbine of the 6990 (60 dB versus an incredible 70 dB in HAWX).

[0] http://www.anandtech.com/show/4239/nvidias-geforce-gtx-590-d...


I currently have a Radeon 5970. It's my first ATI card after many Nvidia cards in the past.

I will never, ever, EVER get an ATI card again. While it might be faster, the driver support is so horrible it's not worth it. Crashes, lousy bugfixes, the Catalyst Control Center sucks, etc.

Also, now that Nvidia has its own Eyefinity-style support, the only reason I had to try ATI in the first place is no longer relevant.

My next card will definitely be an Nvidia card. I don't care if it's marginally slower than the better ATI card. It's worth it to not have hacked-together software/drivers.

ATI should focus more on better software/drivers and less on getting 1 extra FPS over the Nvidia cards.


One note: the Catalyst Control Center just got a major redesign that is actually usable. For those who don't know, in the past CCC first showed you an ad when you started it up, then made you open a menu to choose a vague-sounding option, then click a tab, etc. One of the most annoying pieces of software I've ever used. Now it's the standard list of categories on the left, options on the right, and the world is finally at peace.


Driver support for what? I don't see either company supporting KMS, GEM, Wayland, etc.


AMD does help the development of the open-source radeon driver, which uses KMS, has been modified to use the GEM-ified TTM (assumed to be a step toward GEM support once everything fits), and certainly supports Wayland.

AMD also makes and supports the non-FOSS fglrx driver, which it cannot open-source due to licensing restrictions. The open-source driver is not yet anywhere near fglrx in functionality or speed, but it does seem to be slowly catching up. Some features will probably never work with the open-source drivers -- notably the video decoder, because AMD is afraid that if it releases its specs, the HDCP key embedded within could be stolen. (That this point is completely moot now that HDCP is broken doesn't seem to matter to them.)


I've always considered ATI's excuses for not providing access to certain parts of hardware to be bullshit.

In the past, they tried to avoid publishing specs for TV-out, due to Macrovision (or so the excuse went). In the end it was reverse-engineered, later TV-out became obsolete, and today nobody cares. ATI, however, grew a user base that will never purchase an ATI card again.

Today, they make life more difficult for those who want to use UVD. UVD is quite simple functionality: you provide a compressed buffer and the card decompresses it. You can put the result back into system RAM, into a texture, wherever; it has nothing in common with the output. (In theory, it may flag originally AACS-ed buffers and enforce a secure video path for them, but in practice nobody cares; the pirates have better ways to capture content anyway.) So they are again losing those who want to build video players for the living room, over a theoretical attack that nobody cares about. And they think Zacate will be more popular than Atom+Ion? I doubt it, and a quick look at the XBMC forums confirms my opinion.


Agreed. You only own ATI once.


The trouble is, many of us can make similarly bold claims about NVIDIA, who also aren't above shipping unstable drivers, fudging benchmarks, or more recently just outright crippling the performance of their regular cards to make the workstation-class cards that cost an order of magnitude more seem more attractive. They're also heavily promoting their proprietary CUDA technology over open standards, yada yada.

If a credible third party competitor came along, whose cards were fast enough and whose focus was on quality and support rather than winning every benchmark they can, I would be happy to try their kit instead of either AMD or NVIDIA next time I put together a machine. Unfortunately, as long as there are only two serious players in the game, the attitude of "never again" just means you can never build more than two PCs, which doesn't really get us anywhere.


The weird thing about competing in the "fastest video card" space is that it's always these dual-GPU cards, which pay significant performance penalties to split the workload between the two chips compared to 2x a single GPU. The difference between winner and loser in the dual-GPU shootout (when things are otherwise pretty close) probably comes down to the quality of the GPU-splitting code in the drivers (the same stuff as the SLI/CrossFire code, as I understand it).

While there is no doubt that real-world tests of these dual-GPU powerhouses will show who can eke out the most FPS, if you really just want to compare chip tech versus chip tech, it's probably better to compare at the single top-end GPU versus single top-end GPU level.


Dear folks at AMD,

From my own private perspective (hacker/developer, !gamer), Nvidia provides a comprehensive toolkit and documentation for developing on top of its GPU platform (http://developer.nvidia.com/object/gpucomputing.html).

Do you have the same?


ATI Stream SDK


AMD Accelerated Parallel Processing (APP) SDK (formerly ATI Stream)

http://developer.amd.com/gpu/AMDAPPSDK/Pages/default.aspx


You're right that AMD has traditionally been a little behind Nvidia on the software side. Manju Hegde (of PhysX fame) is now leading a team at AMD that focuses exclusively on making GPU computing as easy and painless as possible, so hopefully things should improve over the next few months.


I was not stating any opinions, rather asking an honest question.

I was glad to discover the APP SDK and will look at it quite soon, though the device in question is not yet listed on the supported devices page http://developer.amd.com/gpu/AMDAPPSDK/pages/DriverCompatibi.... I find the GPU platform an interesting field for R&D activity, and perhaps one day I can even build a startup that takes advantage of it. Time will tell.


At the risk of stating the obvious, OpenCL seems to be their answer:

http://www.amd.com/us/products/technologies/stream-technolog...

I'm not sure if you're saying those efforts aren't good enough (I wouldn't know). While I am sure Nvidia puts more effort into CUDA, it seems that AMD is treating OpenCL as a broad standard. In the same way, you expect a fair amount of documentation from Apple on Objective-C, for example, but are probably OK with AMD not covering C++ that specifically.
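
(As a small illustration of the "broad standard" point: a minimal sketch, assuming pyopencl and numpy are installed, of a kernel that runs unchanged on AMD's, Nvidia's or a CPU OpenCL implementation. It's just the textbook vector add, nothing vendor-specific.)

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    ctx = cl.create_some_context()   # picks an available OpenCL platform/device (may prompt)
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags

    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
        __kernel void add(__global const float *a,
                          __global const float *b,
                          __global float *out) {
            int gid = get_global_id(0);
            out[gid] = a[gid] + b[gid];
        }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)   # same source, no vendor-specific toolchain required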


AMD has just begun supporting OpenCL, but (of course; it's an Nvidia thing) doesn't support CUDA.


Says the company that previously hacked its drivers to score higher in benchmarks... Well, that was ATI on its own back then, but still - not such a good idea for people who remember that.


Nvidia too has a history of cheating in benchmarks.

Here is one report from Futuremark themselves (makers of 3DMark): http://www.futuremark.com/pressroom/companypdfs/3dmark03_aud...

More recent PhysX cheating: http://www.theinquirer.net/inquirer/news/1048824/nvidia-chea...


In the appendix of that Futuremark report, the screen captures are rendered so badly that it looks more like serious driver bugs than cheating. Why was Futuremark so sure it's not bad drivers or some other technical fault? Malice vs. incompetence, etc.


They state that those rendering errors are only visible when using the free-camera mode, not the standard pre-defined camera. That's why it looks buggy: it was never "optimized" to look right outside the benchmark's default camera settings.

They also point out that preventing NVIDIA drivers from detecting 3DMark results in the scene being rendered correctly.

If these claims are true NVIDIA can't shift the blame to some incompetent programmer.


Putting it that baldly is very unfair to AMD.

- They both widely "cheat" on games. If Game A asks for X, the driver returns Y. Usually this improves the user experience, but sometimes it is done more for benchmarkship.

- They both heavily optimize for benchmarks.

- In one case, AMD crossed the line, and their optimization did not return results identical to the non-optimized path.

Whether or not AMD did this deliberately is up for debate. If you've ever had to optimize something, you know how easy it is to return stale cache results or screw up in some other way.

Yes, AMD deserved to get slapped for their mistake, but I don't think one incident is evidence of a culture any worse than Nvidia's.


Ah, don't get me wrong, I did not mean to say nvidia is any better. I was just amused that both companies say "we're the best", both are known to have a "creative" approach to benchmarking and one of them sets a challenge with "prove you're the best with benchmarks".


I agree with the premise of the article. However, as one notch in Nvidia's favor, I think most benchmarks demonstrate that the GTX 590 scales better in SLI than the Radeon 6990 does in CrossFire. Provided you have a nuclear power plant in your computer to power SLI/CrossFire...


I wish they'd put up some honest info about their workstation vs. gamer cards as well. We all know it's mostly the same hardware under the hood in many cases, and that the software and/or support is where they try to justify the cost. Still, if you need to put together a new multi-purpose machine for any sort of serious graphics/video/multimedia work, it's next to impossible to find any meaningful guidance on what sort of spec is best. I'd have a lot more respect for an argument about Nvidia not quoting benchmarks if AMD themselves didn't just assume that if you're running software made by, say, Adobe or Autodesk, you should probably buy a workstation card, just because.


This argument is stupid. Overall, they won't sell more than a few of these cards. The vast majority of computers out there (I would guess 99%) can't support a card like this.

At the mid-range, the two need to remain essentially price-competitive at most performance levels, leading to a stalemate. The two things this has done are:

1) Drive profit margins way down

2) Increase bang for your buck at lower price levels. It's amazing how much card you can get for < $150, and it almost doesn't make sense to buy a card for more than $250 (the current $200-$250 cards are nearly as fast and much cooler/quieter than the top-of-the-line from a year ago).

Maybe having the highest-performing card will give the perception that the lower-end cards also perform better, but I would guess not.

In real news, I recently bought a Geforce GTX 460 for $90 after rebates. It's pretty much enough card for the vast majority of games, and I could buy 7 for less than the price of one of these cards. And, best of all, it's quieter than my case fan nearly all of the time.


So to put it in simple terms:

Chrysler is claiming they are better than Volkswagen because they have a deal with Fiat, who make Ferrari, who have a car that can go around one specific track faster than a Lamborghini, which is owned by VW.


Nvidia seems to be betting a bit of its future on the superiority of CUDA over OpenCL for general-purpose computing. They share the same parallelism model, but Nvidia keeps baking more support for unrestrained C++ into both the hardware and the compiler (see the recent addition of function tables, a unified address space, and a general-purpose cache). I think that if AMD continues to focus primarily on gaming, it's just a matter of time before a slight edge in the gaming domain is all their cards/drivers have left.


Nvidia has already grabbed a huge chunk of the market and mindshare in HPC, and has stated that Fermi was developed as a compute platform as much as a GPU. The "most FPS" contest is really beside the point. AMD is going to suffer as soon as optional CUDA acceleration in common libraries becomes more widespread.


I suppose the question is why anyone would support CUDA only, instead of something like OpenCL. NVIDIA have been promoting CUDA quite aggressively for some time, but the bottom line is that even today, several years after these technologies became widely available, most mainstream applications for graphics, video, CAD etc. still don't use that power on either AMD or NVIDIA graphics cards. If CUDA really has meaningful technical advantages over OpenCL, why aren't these applications using it?



