
I'll be honest, it's very hard for me to imagine what I would do that would demand anywhere near 600 watts for a graphics card alone. I mean my PC can draw at most 120 watts and that feels like a lot, although the PC is pretty old by now. All of this for crypto? How many games are there out there that draw anywhere near this level of wattage just for graphics?


> How many games are there out there that draw anywhere near this level of wattage just for graphics?

Pretty much any “AAA” game at 120+Hz, high resolution, with image quality cranked up.

I’ve had friends who managed to snag 3090s at prices they found acceptable (sic…), and they had to change PSUs because theirs were a tad short (despite being theoretically OK going off the nameplate capacity, though it’s also possible the rail distribution was off). They were rebuilding gaming rigs, no crypto there.

Technically the 2x8 / 1x12 power setup of the 3090 is off-spec; these are not officially supported PCIe power configurations.


I do photogrammetry. That's one use that will max out the entire system (not just the graphics card) for hours on end, and it has nothing to do with games or cryptocurrencies.

Just as an example.


What in particular are you doing with it? I have a feeling my question may be a bit light on details for you to formulate a great answer, but are you doing things like test renderings of an interpolated viewpoint given multiple integrated perspectives? Edge detection? Object "identity" detection, i.e. the (x, y, z) object in perspective W is (a, b, c) in perspective D? That sort of thing?

I could see how that could end up getting very computationally intensive very quickly! Building up the 3-D space, tagging spots where the math doesn't quite work out... Compensating for different lighting conditions from different pictures at different angles to get just the info you're looking for!

Or am I completely off base?


There are programs that do all of this for you. I just take the photos and use the models in other things.


600W worth of photogrammetry?


If you're doing it for work, the faster it goes the more money you make. So it's not a question of 600W of photogrammetry, but of 'how fast can we make it' and the resulting electrical load.


> 600W worth of photogrammetry?

Yes, no doubt. Especially nowadays when point clouds are all the rage.


You ask a question, you get an answer and then you doubt the answer. That's not nice.


It's also an invitation to brag about their work! It's an incredible claim with a factual answer; it's rare you get to show off about your niche like that, right?


Whatever your problem is, if it's remotely calculation-heavy you can draw as much power as the hardware will let you (if the software handles it).


If I don't want to wait for 200 hours at a time, yes sure.


Just built a system with a 3080Ti that I managed to get at MSRP. I’ve spent a lot of effort making it silent (and extremely cool) with just air cooling, even when it’s playing games.

With a 5900X (slight overclock for a max boost of 5.1GHz), 64GB of lightly overclocked DDR4 memory, two 27” 1440p displays, a phone dock, KB, mouse, webcam and condenser mic, it idles at about 150W.

General productivity, like development with a few VMs spun up, and maybe watching a 1080p video in the background? Maybe 180W.

Streaming a game: I’m capping myself at 1080p60, which does not stress the 3080Ti even with all settings maxed out. Two LED light rings also on, plus an LED pendant lamp. Now we’re squarely in 350-400W land for the entire setup.

Not streaming: 1440p, capped at 120fps (the displays go to 170Hz). If settings are cranked up on a very recent title and I’m GPU bound, this is where the GPU alone can actually hit 400W in some scenes (usually 280-350W) and the total power consumption for the room maxes out at 600-650W.

---

There are six 140mm PWM case fans, the CPU has two 150mm PWM fans, and the GPU has three 92mm PWM fans. The case is all steel with sound-deadening foam on both side panels. In every setting it’s silent unless I’m “going for broke” at max quality to average 120fps, where it’ll then be a “light whirr” once it’s heat-soaked, which I’ve measured at 24.8dBA at 3 feet (~92 cm). The GPU peaks at 76C, GPU memory peaks at 70C, the CPU is around 65C, and system memory at 35C. All air cooled.

---

Here’s the thing. Come next year, it’ll be a 32-core Threadripper with 256GB of memory and an NVMe RAID 10 array for real work with Spark without constantly paying by the hour. The CPU, storage, and system memory will probably pull 500W under max load factoring in the displays, KB and mouse. It’ll still idle close to 165W though.


Gotta think about the scale here. 4K gaming pushes 8 million pixels, 60 times per second. Those pixels are calculated from light bouncing off of meshes, each formed of 10,000-100,000 triangles. More triangles make your objects look smoother and rounder. Each triangle also has texture information that dictates its color, as well as specific properties such as reflectiveness. To achieve highly realistic video games (and CGI), you simply have to have that level of complexity.

Even then, the way light is calculated in video games is not realistic at all; it is done that way for significant performance/efficiency gains. Ray tracing is now being added, which is how light behaves in the real world, but that cranks the computational complexity through the roof. That's why it takes thousands of CPU cores 25 hours to render one single frame for a Pixar movie: they go for realistic physics/light.
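To put rough numbers on that, here's a back-of-the-envelope sketch in C using the figures above; the mesh count for the scene is my own assumption, purely for illustration:

  #include <stdio.h>

  int main(void) {
      /* Per-second shading workload at 4K / 60fps, as described above. */
      long long pixels_per_frame = 3840LL * 2160;                       /* ~8.3 million pixels */
      long long frames_per_sec   = 60;
      long long shades_per_sec   = pixels_per_frame * frames_per_sec;   /* ~500 million */

      /* Geometry: 10,000-100,000 triangles per mesh (midpoint used),
         with an assumed 1,000 meshes in view -- the 1,000 is hypothetical. */
      long long tris_per_mesh  = 50000;
      long long meshes_in_view = 1000;

      printf("pixel shades per second: %lld\n", shades_per_sec);
      printf("triangles per frame (assumed scene): %lld\n", tris_per_mesh * meshes_in_view);
      return 0;
  }

And that's before multiplying every pixel by the number of rays and bounces once ray tracing enters the picture.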


Yeah, just wow. Here's what I run fine even under load behind a 600W UPS:

  * 15-20 arm64 SBCs 
  * 5 Ryzen nodes
  * a handful of HDDs and a decent number of SSDs
  * an enterprise-ish 48-port PoE switch using 10GbE on some ports
  * 5 x86 SBCs (routers)

That's a crazy amount of energy for one GPU used for gaming.


I doubt it's for crypto though. Running two cheaper 300W GPUs would be more cost-effective. Also, serious miners have already moved on to ASICs anyway.


Seems to depend on the coin. There are lots of coins you can mine with a GPU.


Ethereum is mined with GPUs. The total mining rewards (block rewards + fees) are actually marginally higher for Ethereum than for Bitcoin, with both around $51M per day.


4K gaming at 120fps with ray tracing would easily push past 300 to 400W, even on a hypothetical 5nm GPU.


OK, this is a tangent, but this kind of stuff irks me. Is there no love anymore for making things pretty enough while actually keeping them efficient? Do game devs (even AAA people) not like the family of tricks like 0x5F3759DF anymore?
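(For anyone who hasn't seen it, 0x5F3759DF is the magic constant from the Quake III fast inverse square root. Roughly, in modern C, it's something like the sketch below; the memcpy is my substitution for the original pointer cast so the example stays well-defined:)

  #include <stdint.h>
  #include <string.h>

  float fast_rsqrt(float x) {
      uint32_t i;
      float y = x;
      memcpy(&i, &y, sizeof i);             /* reinterpret the float's bits as an integer */
      i = 0x5F3759DF - (i >> 1);            /* the magic constant gives a cheap first guess */
      memcpy(&y, &i, sizeof y);
      return y * (1.5f - 0.5f * x * y * y); /* one Newton-Raphson step refines it */
  }

A shift, a subtract, and a few multiplies instead of a square root and a divide, which was the whole point back then.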

Efficiency aside, simply pushing the power requirements doesn't even bring home the bacon. Nintendo's "lateral thinking with withered technology" works super well; just look at the Switch compared to any of the other consoles of its generation. I know PC gaming is different, but even there people dig streams of Undertale over Cyberpunk (although Cyberpunk had issues...), so not everything is about pushing power requirements.

Literally, if all that matters is "realism" and "power", and pushing that boundary is an end in and of itself, well, that boundary is already being pushed by others. I mean, maybe not with a GPU, but I work in HPC and my simulations sure as hell draw more than 600W. So this is a push for power in games specifically, and that just starts to draw out my boomer side and makes me wonder why people don't just go back and optimize their code first.


I think this is three different sets of questions.

1. Should games really be pushing GFX-heavy realism vs. Nintendo-style games?

Well, first, I absolutely love Nintendo and their games. But the answer is that, first, Nintendo's target market is very different from PS5 / Xbox / PC. I am sure Battlefield fans just don't think it would work with Nintendo-style graphics. Second, it really has nothing to do with game dev and more to do with a game's priorities, i.e. it becomes a design / market-fit / choices question rather than a technical question.

2. Keeping games efficient.

AAA games, or as a matter of fact nearly all games, are efficient, comparatively speaking. Just look at how browsers, some of the most optimised software around, are still learning lessons from game dev. Games use as much power as they can to push the limits on graphics and physics. That is not an efficiency question, but a question of how far they can push the boundary. Remember, 4K is FOUR times the resolution of 2K, and 120fps is TWO times the normal standard of 60fps; that is 8 times the complexity alone, excluding any additional high-quality assets or modelling. I didn't even put ray tracing into that equation. Even if you assume 600W gives you double the performance of 300W (which it doesn't), you can already see the disparity between complexity and performance improvement.

Finally, the third point I want to mention is that, hypothetically speaking, ray tracing will lower the cost of graphics assets, which are by far the largest cost centre for modern games, since designers will no longer have to do all sorts of special tricks for effects, saving them some time. Although I imagine that in reality these saved costs and time will only be spent somewhere else on even better graphics.


No comment on 3.

On 1, the point is that graphics and what you call game dev aren't really that necessary for a game to sell or to be fun. It was in response mostly to people who for some reason think they are, especially those who think that "Battlefield" or "The Last of Us" games are a different category of game. Idk, that sort of thinking is a big problem, part of why games have moved away from being fun, and what led to the rise of "press F to pay respects" as a meme. Too much emphasis on very American ideas of what makes a game good (graphics, intricacy or complexity, and too much storytelling) versus what actually works without fail: just being fun. The thing is, making something fun is actually pretty difficult because it requires ingenuity and creativity, while the others just require grinding, and there is already a path forward.

Again, I said this was a tangent because it is; I'm just tired of modern games plain not being fun and being too geared towards being interactive movies with eye candy. The enabling (requiring) of 600W cards for a video game is just yet another result of this slide in the larger game dev space, which is why I bring it up.

On 2, I know you're speaking comparatively, but really, just compare to games from 10 or 20 years ago. I remember game developers who needed to cut corners telling me they don't worry about handling memory for strings "that are just a few megabytes anyway", and sure, maybe you don't need to worry about that today, but no: on an absolute scale gamedevs are neglecting things like string processing[0] whilst caring so damn much about graphics, so you can't argue that things are not more inefficient now. I do get your points about 120Hz 4K, though honestly, if these are the power requirements, then I'm starting to form the opinion that 120Hz might make gaming an activity that is bad for other reasons, particularly climate change, as argued in other threads in this comment section.

Anyway, the general reason for this rant is, again, that AAA game devs are approaching a state similar to the JS community, where instead of being caught in a cycle of churning frameworks, they're caught in an arms race to make things bigger, more "realistic", and ever more resource-consuming, whilst getting their lunch eaten by (relatively, in AAA's eyes) garbage-looking things like Fortnite, Animal Crossing, and Undertale, made on vastly lower budgets. I want them to realize that these massively successful games, which aren't in the vein of what you think of as AAA, cut through the fog and should make studios with gobs of money realize what actually makes games good, and thus what will actually make them sell.

[0] https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...

[1] https://www.reddit.com/r/Undertale/comments/4auo3n/putting_u...


Gamedev is still full of clever people pushing graphics cards to their limits. The kinds of tricks are different nowadays, as most of the work is done on the GPU, and the important thing is to keep it busy with highly parallel, branchless code with good memory locality.
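As a toy illustration of the "branchless" bit (made-up example, not from any real engine), in C: instead of letting threads diverge on an if, you compute both sides and blend with a mask so every thread runs the same instructions:

  /* Branchy: threads taking different sides of the if diverge. */
  float shade_branchy(float x) {
      if (x > 0.5f)
          return x * 2.0f;
      return x * 0.5f;
  }

  /* Branchless: compute both results, then select with a 0/1 mask. */
  float shade_branchless(float x) {
      float hi   = x * 2.0f;
      float lo   = x * 0.5f;
      float mask = (x > 0.5f) ? 1.0f : 0.0f;  /* compilers lower this to a select */
      return mask * hi + (1.0f - mask) * lo;
  }

Same output, but the second shape is what keeps thousands of GPU lanes busy instead of idling.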


My point is that being embarrassingly parallel rather than efficient and smart isn't actually being clever; hence why the phrase is "embarrassingly parallel". That said, I often need to get work done (publish papers), so I opt for that over being more clever in my code.

I sort of looked up to gamedevs because they seem to accomplish a good experience without having to model everything, and it's an attitude I've fantasized about bringing to my work. It sounds, however, like they are moving in our direction, which is disappointing, I guess.


4K 120Hz isn't efficient? You'd see higher, but we don't even have higher display connection specs yet.

You're also conflating graphics, game design, and hardware design. These are not the same groups. Every pillar is pushing forward. There's no zero-sum game there.


A game dev would rather release a broken game and start selling sooner than spend time and money polishing it while not selling copies. Gamers don't buy games because the code is performant.


OK, you're explaining a problem, though (Cyberpunk was definitely a problem), albeit matter-of-factly, but examples clearly show that lack of performance hurts sales. You're right, people don't buy games because they are performant; they buy them because they are fun. But if your game is unperformant enough, it won't be fun... ergo sales sink.

Fewer people can afford that stuff, by virtue of how expensive 120+Hz equipment is (not just the monitors but everything else), and if you're limiting your $60 game to just those people, that also reduces sales. Again, the logic isn't there, and you won't beat Animal Crossing: New Horizons in earnings, so even the economics isn't there. The logic is totally wrong.


Nobody is limiting their game to people with absolutely high-end setups.


> All of this for crypto?

ML training


> How many games are there out there that draw anywhere near this level of wattage just for graphics?

Well, since it's the brand new, top-of-the-line connector, hopefully very few current applications will max it out.


90% of this for crypto, the rest for ML and games



