
I do a lot of 3D rendering with Blender Cycles, so I'm really happy to see the bump in CUDA cores. The GTX 1080 didn't offer any significant performance boost over the 980 when it comes to rendering in Blender. Right now for Cycles you get the most performance per dollar by buying multiple 1060s (Blender doesn't rely on SLI when rendering with multiple GPUs). I've got my fingers crossed that this card will offer some tangible performance boosts.


Hey, I'm experimenting with Blender too.

Question: ignoring SLI, do you have different GTX models plugged into your motherboard for rendering? Like two 1060s and a 980 Ti?

Have you tried background rendering via the command line, targeting the GPUs as workers but also using the CPU?

E.g. if you have two graphics cards, run three command-line Blender render jobs (2 GPU + 1 CPU).
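
Roughly what I have in mind, as a sketch; the .blend path, the frame split, and the set_gpu.py / set_cpu.py device scripts are all hypothetical placeholders, and the CUDA_VISIBLE_DEVICES pinning is standard CUDA behaviour rather than a Blender feature:

    import os
    import subprocess

    BLEND_FILE = "scene.blend"  # hypothetical .blend path

    # One entry per job: (device-setup script, extra env vars, frame range).
    # set_gpu.py / set_cpu.py are hypothetical scripts that pick the compute
    # device via the Blender Python API; CUDA_VISIBLE_DEVICES restricts which
    # card each process can see.
    jobs = [
        ("set_gpu.py", {"CUDA_VISIBLE_DEVICES": "0"}, 1, 110),
        ("set_gpu.py", {"CUDA_VISIBLE_DEVICES": "1"}, 111, 220),
        ("set_cpu.py", {"CUDA_VISIBLE_DEVICES": ""}, 221, 250),
    ]

    procs = []
    for script, extra_env, start, end in jobs:
        cmd = [
            "blender", "-b", BLEND_FILE,   # -b: run headless
            "-P", script,                  # device script; must come before -a
            "-s", str(start), "-e", str(end),
            "-a",                          # render the animation over that range
        ]
        procs.append(subprocess.Popen(cmd, env={**os.environ, **extra_env}))

    for p in procs:
        p.wait()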


I haven't used multiple GPUs of different models, but everything I've read suggests it should work. If one GPU is significantly slower than the other, you might run into situations where the slower GPU gets stuck working on the last tile while the faster GPU sits idle, making your total render time go up. I imagine this would be pretty rare, and a little effort to tweak the tile size would mitigate any issues.
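
If it does come up, the tile size is just another property you can poke from the same Python script. A minimal sketch; note the property names have moved around between Blender versions:

    import bpy

    scene = bpy.context.scene

    # Smaller tiles mean a slow GPU can't get stuck holding one huge
    # final tile. tile_x/tile_y is the pre-3.0 API...
    scene.render.tile_x = 64
    scene.render.tile_y = 64

    # ...in Blender 3.0+ Cycles uses a single setting instead:
    # scene.cycles.tile_size = 64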

As far as rendering from the command line with both the CPU and GPU, it's definitely possible, albeit a little hackish. Rendering from the command line uses the CPU by default; to render on the GPU you have to pass a Python script as an argument that changes the render device using the Blender Python API. Blender supports multiple GPUs out of the box, so there is no reason to split them up into separate jobs (even different GPU models that don't support SLI). You'd only need one job for the CPU and one for the GPU(s).

The tricky part is making sure the CPU and GPU work on different things. For animations you'd probably want to change the frame step option: setting it to 2 makes Blender render every other frame, so the CPU can work on the even-numbered frames and the GPU on the odd ones. For single-frame renders you could set a different Cycles seed value for each device and then mix the two generated images together.

Both the seed value and the step option can be set in the Python script, which means it's pretty easy to automate the entire process. It's definitely not trivial to get working, though, so at some point you need to decide whether the 0.1x speed bump from adding the CPU is worth the effort. Any new NVIDIA GPU is going to be worlds faster than whatever CPU you might be using.
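
For reference, the kind of script I mean looks roughly like this. The exact preference path has changed between Blender versions (this is approximately the 2.8+ form), and the seed/step values are just examples, so treat it as a sketch rather than copy-paste:

    import bpy

    # Enable CUDA and flag every detected device for rendering.
    cycles_prefs = bpy.context.preferences.addons['cycles'].preferences
    cycles_prefs.compute_device_type = 'CUDA'
    cycles_prefs.get_devices()          # refresh the device list
    for device in cycles_prefs.devices:
        device.use = True

    scene = bpy.context.scene
    scene.cycles.device = 'GPU'

    # For animations: render every other frame so a second (CPU) job can
    # take the frames in between. Offset the CPU job's start frame by one.
    scene.frame_step = 2

    # For single-frame renders: give each job its own seed and stack the
    # results afterwards (see the link below).
    scene.cycles.seed = 0

You'd pass it in front of the render arguments, e.g. blender -b scene.blend -P set_gpu.py -a (the script name is made up; order matters, since Blender executes its command-line arguments left to right).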

See here for instructions on stacking Cycles renders with different seed values:

http://blender.stackexchange.com/questions/5017/stacking-cyc...
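
The linked answer does the stacking with mix nodes in Blender's compositor. A rough offline equivalent, assuming each render used the same sample count (and ideally linear output like EXR rather than 8-bit PNG), is a straight average of the images; file names here are made up:

    import numpy as np
    from PIL import Image

    # Hypothetical outputs of two renders that differ only in their seed.
    paths = ["frame_seed0.png", "frame_seed1.png"]

    # Load as float and average pixel-wise. Averaging 8-bit sRGB files is
    # only an approximation; doing it on linear data is the correct version.
    mean = np.stack(
        [np.asarray(Image.open(p), dtype=np.float64) for p in paths]
    ).mean(axis=0)

    Image.fromarray(mean.round().astype(np.uint8)).save("frame_combined.png")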


> Rendering from the command line uses the CPU by default.

I don't believe this is still true. When I render from the command line, it will use the GPU if my user preferences are set to GPU. (confirmed by render timings)


Oh nice, I'm really happy to hear that, because passing in a Python script is a pain. I don't remember seeing it in the release notes; maybe the developers didn't consider it a big enough change to warrant writing down.



