We have tested for you: Arnold 6
Launched in beta in spring 2019 as part of release 5.3, Arnold GPU now ships its final version with release number 6. A major turning point in the life of the renderer.
What I mostly expect from a CPU/GPU renderer is a seamless switch between the two calculation methods: no rendering differences between the shaders, and no limitations in the use of displacement, AOVs, volumes, etc.
Arnold GPU handles SSS, hair, OpenVDB, instances, procedurals and atmospheric effects.
Very few renderers have fully succeeded in their port from CPU to GPU. What about Arnold?
My Arnold 6 test was done within 3ds Max. The choice between CPU and GPU calculation is made in the Render Setup dialog, under the System tab. When you launch a GPU render for the first time, Arnold must compute a "GPU cache" of the various shaders and objects. This can take several minutes depending on scene complexity; it took up to 3 minutes on my test scenes. You can also force this cache to be computed ahead of time. Best done during a coffee break…
Once the cache has been computed, renders start more quickly. You can also select which GPUs to use in the same tab. I have two graphics cards: an old GTX 1080 and a more recent RTX 2070 Super. Arnold advises against using two cards that do not support the same CUDA version.
I used the various test scenes available on the Autodesk website to compare rendering speed between my processor (Core i7 970, 6 cores) and the two graphics cards. In all cases, and with the same settings, the GPU is faster. It is also interesting to see the difference between the RTX and GTX cards, especially in scenes with a lot of diffuse light scattering in the environment.
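To compare runs like these fairly, I find it easiest to normalize every render time against the slowest device. The small helper below sketches that calculation; the timings in the example are purely illustrative placeholders, not the measurements from my test scenes.

```python
def speedups(timings):
    """Given {device_name: render_seconds}, return each device's
    speedup factor relative to the slowest entry (higher is faster)."""
    baseline = max(timings.values())  # slowest device = 1.0x
    return {device: round(baseline / seconds, 2)
            for device, seconds in timings.items()}

# Illustrative numbers only (not my measured results):
print(speedups({"CPU": 600.0, "GTX 1080": 200.0, "RTX 2070 Super": 100.0}))
# {'CPU': 1.0, 'GTX 1080': 3.0, 'RTX 2070 Super': 6.0}
```

Normalizing this way makes it easy to spot when the RTX card pulls further ahead of the GTX on a given scene type, independent of the scene's absolute render time.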
Bear in mind, though, that with the same settings the noise generated by the GPU is higher than the CPU's. Even if you raise the GPU settings to reach the same noise level as the CPU, the GPU still wins. The documentation describes a procedure to match noise levels between GPU and CPU.
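A rough way to reason about that noise-matching step: on the CPU, Arnold's documentation describes total rays per pixel as the Camera (AA) samples squared, multiplied by the squared secondary sample counts, while this version of Arnold GPU is driven essentially by the Camera (AA) value alone. The sketch below is my own simplified model of that relationship, not an official Arnold formula, and the default secondary sample values are assumptions.

```python
import math

def cpu_rays_per_pixel(aa, diffuse=2, specular=2, transmission=2, sss=2):
    """Approximate total rays per pixel on Arnold's CPU renderer.

    Each of the aa*aa camera samples spawns roughly diffuse^2,
    specular^2, etc. secondary rays (simplified model)."""
    return aa ** 2 * (1 + diffuse ** 2 + specular ** 2
                      + transmission ** 2 + sss ** 2)

def matching_gpu_aa(aa, **secondary):
    """Camera (AA) value so a GPU render, driven by AA samples alone,
    shoots roughly the same number of rays per pixel as the CPU settings."""
    return math.ceil(math.sqrt(cpu_rays_per_pixel(aa, **secondary)))

# Example: CPU at AA=3 with the assumed secondary defaults
print(cpu_rays_per_pixel(3))   # 3^2 * (1 + 4 + 4 + 4 + 4) = 153
print(matching_gpu_aa(3))      # ceil(sqrt(153)) = 13
```

This is only a ray-budget estimate; actual perceived noise also depends on the scene, so the documented comparison procedure remains the reference.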
[Chart: comparison of render durations, CPU vs GPU (RTX 2070) vs GPU (GTX 1080)]
To conclude
The GPU limitations seem easier to work around if the project is well structured, and completing a full project on the GPU seems possible with this release. But I still have doubts about whether a sequence can easily be rendered with a mix of CPU and GPU rendering without modifying the settings.
I was also quite surprised by the performance difference between the RTX and GTX cards.
This evolution is a success, and it will definitely please a lot of Arnold users.