4 Easy Ways to Speed Up Cycles

Did you know that the Blender Internal render engine has been discontinued?

*gasp!*

I know. Apparently the decision was made 2 years ago that development of the Internal render engine would cease, and that efforts would be focused on Cycles instead.

For some reason this news was never broadcast to the community, so I’m doing it right now.

What does this mean? It means that the future of Blender will likely be Cycles only. If you haven’t made the switch already, then you may want to consider it. This tutorial may help.

I posted this news recently on the Blender Guru Facebook Page, and besides the initial shock, the overwhelming response was “Noooo! Cycles is too slow!”

But here’s the thing: Cycles can be fast. Competitively fast in fact. But you have to know how to tweak it.

In the words of Thomas Dinges (developer), “The Internal rendering engine was built for speed, but if you wanted realism you had to turn stuff on. Cycles is the other way round. It’s built for realism, and if you want it fast you have to turn stuff OFF.” (said during a conversation at the 2012 Blender Conference).

So without further ado, here’s a list of 4 Easy Ways You Can Speed Up Cycles…

1. Switch to GPU rendering

This may sound like an obvious tip to some of you, but a lot of users aren’t aware just how much faster GPU rendering is.

Take this scene for example:

BMW Benchmark Scene by Mike Pan – Download it here.

On a CPU it renders in 9 minutes 34 seconds.

On a GPU it renders in 46 seconds.

“Whaaaaaaa…!”

That’s right. Simply by changing one setting you can cut your render times by a factor of 12! It’s the single best thing you can do to improve your render times.

To enable GPU rendering, go to File > User Preferences > System and under Compute Device, check CUDA. Then in the render panel you will have the option to change the device to GPU.

NOTE: GPU rendering is currently only possible on Nvidia graphics cards. Support for AMD cards has been put on hold due to driver and hardware limitations.
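If you’d rather flip these switches from a script, here’s a minimal sketch using Blender’s Python API (assuming a 2.6x/2.7x-era build; later releases moved the compute device setting elsewhere):

    import bpy

    # Enable CUDA as the compute device in User Preferences
    # (2.6x/2.7x-era property; later releases moved this setting)
    bpy.context.user_preferences.system.compute_device_type = 'CUDA'

    # Tell Cycles to render the current scene on the GPU
    bpy.context.scene.cycles.device = 'GPU'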

2. Reduce the number of bounces

Comparison of Bounce settings in Cycles

One of the biggest reasons that Cycles takes so long to render is that it calculates light bounces.

What are light bounces, you ask? Light bounces are indirect light: light that bounces off walls and other objects before reaching the camera. It’s what makes the scene look so good in comparison to the Internal renderer.

However, this realism comes at the price of render times.

By default the maximum number of Light Bounces is set to 8, which is far too high in my opinion. I use Cycles a lot, and I rarely need more than 4 bounces for adequate realism.

To change the number of bounces, go to the render panel and under Light Paths, you’ll find Bounces. Set the Min to 0 and the Max to a low setting. Experiment until you find a value that achieves a good amount of realism without sacrificing too much in render times.

For even more fine tuning, you can adjust the number of bounces for individual light path types like diffuse, transmission and glossy. In the example above, I would set the Transmission amount higher than the others, as it is the most noticeable when reduced.
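If you prefer to set this up in a script, the same controls are exposed as Cycles scene properties. A minimal sketch, assuming the 2.6x/2.7x-era Python API (min_bounces was removed in later versions); the values are illustrative starting points, not magic numbers:

    import bpy

    cycles = bpy.context.scene.cycles

    # Global bounce range: Min 0, Max 4 is usually enough for realism
    cycles.min_bounces = 0
    cycles.max_bounces = 4

    # Per-type fine tuning: keep transmission higher, since glass and other
    # refractive materials are the most noticeable when bounces are reduced
    cycles.diffuse_bounces = 2
    cycles.glossy_bounces = 2
    cycles.transmission_bounces = 6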

3. Change the Tile Size

Another awesome, yet relatively unknown way of reducing render times, is to change the “Tile Size”.

What are Tiles? Tiles are those little boxes that appear on your screen while Blender is rendering.

OH TILES. If I had a dollar for every minute I’ve spent watching these squares unfold, I could retire.

Tiles are awesome, because they allow the processor to focus on a smaller portion of the scene at a time and save memory, thus reducing crashes.

Blender has always had the ability to change the number of tiles, but a recent code change means you can now set the exact tile dimensions:

The new way of setting tiles in Blender…

That’s right. Tiles are no longer defined by their count, but by their size in pixels. This was introduced because certain sizes render faster than others. (If you don’t have Tile Sizes, download the latest trunk version of Blender.)

I conducted some studies using this scene, and came to these results:

Yes… tile size matters a whole lot more than you may have thought.

Interestingly, the fastest render time for the CPU is the slowest on the GPU. This is because the GPU can only render one tile at a time, so it doesn’t benefit from more tiles.

In summary, the optimal tile size for GPU is 256 x 256. For CPU it’s 16 x 16. And if those don’t work for you, try to keep it to powers of 2 (e.g. 128, 256, 512, 1024), as the processor handles these faster.
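In script form, tile dimensions live on the render settings rather than on Cycles itself. A sketch of the rule of thumb above, assuming the 2.6x/2.7x-era property names:

    import bpy

    scene = bpy.context.scene
    render = scene.render

    if scene.cycles.device == 'GPU':
        # The GPU renders one tile at a time, so fewer, larger tiles win
        render.tile_x = 256
        render.tile_y = 256
    else:
        # The CPU renders many tiles in parallel, so small tiles keep all cores busy
        render.tile_x = 16
        render.tile_y = 16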

4. Reduce Your Samples

Put this one in the “duh” category, since most people already know it. But samples are the biggest time sucker of all, so I need to mention it.

What are Samples? Samples are the number of passes Cycles makes to refine the image; the noise you see while rendering clears up as more samples accumulate. In the render panel you define the number of samples, and Blender stops once it reaches it. The more samples, the clearer, but longer, your render is.

More samples are generally good. But there comes a point where more samples do almost nothing.

Take these two examples:

2000 samples – 9 minutes

5000 samples – 21 minutes 23 seconds

Did you really need those extra 3000 samples? Unless you’re a pixel scientist (I’m sure the job exists somewhere) you probably wouldn’t have noticed much of a difference. And if you did, you could always put the render through Photoshop to clear up any remaining noise.

If you’re only rendering a still, an extra 12 minutes probably isn’t anything to stress over if it’s for the final render. But if you’re rendering an animation? Well those frames will add up very quickly.

Moral of the story: Don’t go overboard on the samples, because the audience probably won’t notice anyway. Experiment and see how few samples you can get away with.
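For scripted or batch renders, the sample count is a single Cycles property, and the viewport preview can use a much lower value than the final render. A minimal sketch, again assuming the 2.6x/2.7x-era property names:

    import bpy

    cycles = bpy.context.scene.cycles

    # Final render samples: start low, and raise only until the noise is acceptable
    cycles.samples = 500

    # Viewport preview samples can be far lower, since they only drive look development
    cycles.preview_samples = 50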

Got a tip for speeding up Cycles? Post it in the comments below.

Read the Russian Translation

About Andrew Price

User of Blender for 9+ years. I've written tutorials for 3d World Magazine and spoken at three Blender conferences. My goal is to help artists get employed in the industry by making training accessible and easy to understand. I'm an Aussie and I live in South Korea ;)
  • Enrico Mandirola

    Hi, I’m on Blender 2.70a – iMac with AMD Radeon 6770M… I can choose the GPU in system preferences, but when I start the render… Blender freezes!! Does anyone have an idea please?? Maybe I’ll have to wait for the next release??

    Thanks to all

    EM

  • SilverWolf

    This is absolutely fantastic!!! My animation rendering time was really cut by a large portion just by switching from my CPU to my GPU. Wanna know the difference? Well, I’ve been rendering for about 10 minutes and I am at 83/135 frames. With my CPU, I’d be at about 40-50 frames. It’s not a huge difference, but to me, time is money! As an even bigger surprise, I’m using the Intel i7-3770K and the EVGA GTX 780 SuperClocked Edition. Yup. I love you Blender~! Thanks Mr. Price!

  • Mike Lucks

    I basically love you for writing this. Wondering, can you explain the Sampling options more in-depth, or at least point me to an article?

  • Javier Diaz de Leon

    Can anyone say if CUDA 5.5 (and/or 6, for that matter) yields faster rendering in Blender on Ubuntu 13.10 or 14.xx than CUDA 5.0?
    Btw, I just installed CUDA 5.0.35 from Synaptic, but Blender fails to render whenever I try to use CUDA + GPU as the compute device. It does render if I switch to CPU, or to the Blender Internal renderer.

  • Damian Poirier

    I have a GeForce 9800 GT with 112 CUDA cores. I enabled CUDA in preferences. Set render to GPU Compute. Blender 2.69. Yet pressing the render button gives a screen flicker, then no change, no render, not even a progress bar. CPU render works fine.
    Any ideas what I’m doing wrong?

    • Kelli

      From the official Blender wiki:

      NVidia CUDA is supported for GPU rendering with NVidia graphics cards. We support graphics cards starting from GTX 4xx (compute capability 2.0).

      GeForce 9800 GT does have 112 CUDA cores, but it only has compute capability 1.1 according to: https://developer.nvidia.com/cuda-gpus

      • Damian Poirier

        Thank you for confirming my suspicions. Time for a new card. :-(

  • SuperChris01

    Is there any way to use AMD graphics cards (even though the Blender developers aren’t finished working on them)? Even if there are some bugs, I’d like to at least get some kind of render faster than 1-3 pictures a night, depending on my samples and resolution.

  • Sean K. Salomaa

    I’m building a town. I’m halfway through my 5th building. When I render on the GPU (GTX 770M in SLI) I start to get purple textures instead of windows, bricks, etc., & it takes 10 seconds. When I render on the CPU (i7 4800MQ) all the textures are there and it takes 2 seconds. Usually I use the GPU & it’s far superior to the CPU, but I guess I’ve reached the GPU memory limit… I’d still like to find out more info on this predicament.

    • The Helper

      This happens to me. Usually when I UV unwrap an object and put a texture on it. All you have to do is re-apply the texture and it should fix it.

      • Sean K. Salomaa

        Didn’t fix it for me. As I add the buildings one by one into the preview render window, everything is fine till I add the last two buildings, then the purple textures start to show. I think it’s a memory thing, because it’s random, depending on the order I add the buildings into the scene (it’s texture heavy, well over 100 textures now).

      • Sean K. Salomaa

        Thanks for the advice though. :)

  • JasonEnzoD

    I found this article after searching for instructions to enable the Cycles renderer, so it will be useful I’m sure, but how do I enable the renderer?

    • nastys

      At the top of the window, there is a menu with “Blender Render” selected. Select “Cycles Render”.

  • Adam Szablewski

    I found it rendered slightly faster when using more threads. Go to Performance > Threads and click on Fixed, then just increase that number.
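    The same setting can be changed from a script; a quick sketch, assuming the 2.6x/2.7x-era property names:

      import bpy

      render = bpy.context.scene.render
      render.threads_mode = 'FIXED'  # switch Threads from Auto-detect to Fixed
      render.threads = 8             # raise to taste, e.g. your core count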

  • Adam Szablewski

    How can I change to GPU? When I go to User Preferences > System, I can’t change to GPU because there is no such option! Btw I have an Intel Core i7. Plz help

    • Connor

      I assume that your GPU is some version of Intel integrated graphics. No support is given for integrated graphics because it offers no speed advantage over the CPU, or at least that’s what I read from a dev when trying to find out why it didn’t show up for me.

  • Ray LeDuc

    Thanks for the great tips.

  • nastys

    Another way to remove noise is to use the bilateral denoising filter.
    http://www.youtube.com/watch?v=9R39TVqq-Ss

  • Ryan Schroeder

    Anything adds up for a would-be pro with 4 GB shared memory, a quad-core i5, and Intel 400 HD graphics.

  • Marco Robbesom

    This is all fine, but I really got a boost when I disabled shared memory in the BIOS.

  • nastys

    I figured out how to speed up Cycles without losing quality.

    Using GPU render (CUDA):
    Tiles: Top to Bottom; 512×512
    Viewport: Static BVH
    Enable Progressive refine, Save Buffers, Cache BVH, Persistent Images, Use Spatial Splits.

    Using CPU render:
    Tiles: Top to Bottom; 64×64 for simple scenes or 16×16 for heavier ones
    Viewport: Static BVH
    Enable Save Buffers, Cache BVH, Persistent Images, Use Spatial Splits (do NOT enable Progressive refine because it is slower on CPU!).

    I also recommend rendering 5 samples instead of 10, and 5 max bounces instead of 8, because there is no visible difference.

    There is no performance difference between a GPU-accelerated desktop environment like Unity and a lightweight one like LXDE.
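    If you want to set all of that from a script, here is a rough sketch using the 2.6x-era Python names (the properties behind “Cache BVH” and “Persistent Images” are my best guesses and may differ between builds):

      import bpy

      scene = bpy.context.scene
      cycles = scene.cycles
      render = scene.render

      cycles.tile_order = 'TOP_TO_BOTTOM'      # Tiles: Top to Bottom
      cycles.debug_bvh_type = 'STATIC_BVH'     # Viewport: Static BVH
      cycles.debug_use_spatial_splits = True   # Use Spatial Splits
      render.use_save_buffers = True           # Save Buffers
      render.use_persistent_data = True        # Persistent Images (assumed name)
      cycles.use_cache = True                  # Cache BVH (assumed name)

      # Progressive refine helps on GPU but is slower on CPU
      cycles.use_progressive_refine = (cycles.device == 'GPU')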

    • agor

      First of all, kudos to Andy. Thanks Andy!
      You don’t bother with stuff like that until you start thinking about how to solve such issues.
      Secondly, thanks nastys! (Read on, guys.)

      I know the reply comes after 5 months, but I want to share this:
      I am using GPU rendering to do an animation.
      The system specs are irrelevant to this; I have seen a HUGE difference using the tips noted from nastys (thanks again!), so here it goes:
      I am rendering an animation, and decided to use a rather difficult frame where lots of things happen as a testbed.
      The render time with the settings that nastys suggests turned off: 52.80
      The render time with the settings that nastys suggests turned on: 36.71
      (That’s seconds)

      Now, that is HUGE! That’s a 69.5% difference.
      Even for a single frame, that is a lot.

      Although I found that what works best for me with regard to tiling is not dividing, but using the final resolution.
      I tried 1920×1080 and 1280×720; not a big difference, but it still performs better.

      If your graphics card’s VRAM is not limited, try it.

      • agor

        Sorry, that’s probably 30.5%, not 69.5%, but it’s still huge…

      • nastys

        If you want more speed I have some more tips:
        - use the latest Nvidia CUDA (I tried 5.5 on Ubuntu 13.10 x64 and it’s faster!)
        - use the latest Blender, the latest builds are faster
        - disable Progressive refine if you’re rendering an animation and/or you don’t want to stop the rendering before it is complete
        - sometimes smaller tiles are faster, depending on the scene

        • agor

          Ok, thanks, I’ll try that.
          I came up with a problem that (most likely) cannot be solved with GPU rendering though:
          Subsurface Scattering.
          Any hints on that?

          Perhaps there are more, but that’s the one I bumped into until now.

  • René Aye

    Adjusting the tile size can speed things up a lot. But with my GPU, a tile size with a power of two is NOT the fastest setting. I did some speed tests with the BMW test scene, and the fastest setting was 480 × 270.
    Furthermore: I can imagine that the best tile size setting differs from scene to scene. Has somebody checked that?

  • Alberto Sánchez

    YOU REALLY HELPED ME OUT, except that I don’t have a graphics card that allows GPU rendering; sadly I have a GT 520. Still, it does very high quality work faster than ever!!! Thanks to you! And now I can fly in Flight Simulator X while still rendering, without screwing up my FPS in FSX, because it’s a video and I put it at 80% resolution (and it still looks HD!!!!) :))

  • Refaet Hossain Rayhan

    My CUDA GPU (NVIDIA 210) is not working for rendering. What can I do?? Plz help

    • Blendthusiast

      The NVIDIA 210 has only 16 CUDA cores and 512 MB of memory. Unfortunately, when you are on GPU rendering, Blender can only render scenes whose textures fit below the GPU memory size. As 512 MB of memory is too low, you will be better off with CPU rendering.

      Source: I use a GTX 770 with 1536 CUDA cores and 2 GB of memory. I still have to delete textures when rendering an interior scene with lights.

  • Io Witter

    For those that need more video processing grunt, I stumbled across this page that discusses how to use an external GPU via your ExpressCard slot (PCIe):

    http://forum.techinferno.com/diy-e-gpu-projects/2109-diy-egpu-experiences-%5Bversion-2-0%5D.html#prepurchasefaq

    Disclaimer: I have not tried it (but will)

    Disclaimer two: I expect you to have enough smarts not to fry yourself on the exposed circuitry.

    I was looking at building a system with more GPU clout anyway; if it doesn’t work, the components can go towards this.

  • friendstype25

    I realized that if you have low bounces and the entire scene is bright enough, you can make it appear that the grainy dark spots are there artistically on purpose, and save a tremendous amount of time, because you can use as few as 1/10 the samples.

  • Samm Hi

    5000 samples in just 21 minutes? :O I once rendered a scene at 150 samples and it took me an hour! Unfortunately my laptop doesn’t have a GPU, so I can only use CPU rendering.

  • Tawonga Donnell Msiska

    I am not really sure that points 3 and 4 are really applicable for final productions. The time improvements brought about by tile sizes are hardly noticeable (128×128 vs 16×16 on CPU, or 64×64 vs 512×512 on GPU). Point number 4 actually depends on a lot of factors. Depending on how your scene is set up (materials, textures, lights, mesh polygons, particle systems etc.), 80 samples may actually do the trick; set it up differently and you will see that you need 5000 samples or more to render the same scene. So we rarely have enough control over this factor, as it is determined by the required quality of your final render.

    One other thing you should have pointed out is the operating system. A 32-bit OS renders slower than a 64-bit one. If you can find one with even more bits, even better, though I’m not sure that even exists. The CPU architecture is also important; single-chip integrated CPUs are slower than separate ones. Either way, Cycles is expensive to run, slower, and much more tedious to tune than Blender Internal. The developers should improve this engine, and fast!

    With all that being said, I still love Cycles despite the pain in my behind!!! :-)

  • Erik

    Speeding up Blender in general:

    Use Linux!

    • bluelightzero

      and use openbox :)

      • yumri

        Use a 128-bit OS like Solaris. Though you will need specially made hardware for it, the system will run faster on apps which are actually compatible at 128-bit, and you will have to tweak the Blender code to make it 128-bit, but it does run faster, as fast as some of the GPUs do. Sadly, having a 128-bit system isn’t that good, as not much can actually run besides that one OS, and there are not many programs for it besides database and production software.
        Point being that if you can make the CPU and GPU around the same bit width, they will exhibit little to no difference besides the main thing of having more to use when rendering.

        • Soul_Est

          Only the filesystem that Solaris uses, ZFS, is 128-bit, NOT the entire OS.

      • nastys

        I see no performance differences between Unity and LXDE (which uses the Openbox window manager), even when using the GPU.

        • bluelightzero

          “I see no performance differences”

          Now that I have a new computer, neither do I, really. But my old one would slow down quite a bit with Unity.

          • nastys

            On Ubuntu 13.04 my GPU rendered slower than my CPU. Now I’m testing Ubuntu 13.10, which is as fast as LXDE, and faster than my CPU. Of course, the PC is very slow while rendering but it renders as fast as LXDE.

  • Raghav

    Older NVIDIA GPUs are not supported in the newer versions of Blender. We also need a GPU that supports compute capability 2.0 and above. Or else we’re just slowing down the render, let alone speeding it up. The GPUs that are compatible are the NVIDIA GTX series, which are fairly new. So people, don’t get your hopes up, because you are not going to easily save time very soon. And that’s just a pain in the a**.

  • Fredhystair

    Hi guys
    I’ve got the same problem as John:
    My GPU renders take more than 2 times longer than my CPU renders. I really don’t understand. Here’s my setup:
    - Win7 X64
    - latest version of Blender (downloaded on the official site)
    - My Graphic card is Nvidia GeForce GT 520 with drivers updated
    - The CUDA Toolkit is properly installed
    - Set ups made in user preferences are OK too.
    Anyone got an idea? It kinda drives me mad :-)
    Thx

    • Anonymous

      See the newer comment by Raghav: “Older NVIDIA GPUs are not supported in the newer versions of Blender. We also need a GPU that supports compute capability 2.0 and above. Or else we’re just slowing down the render, let alone speeding it up. The GPUs that are compatible are the NVIDIA GTX series, which are fairly new. So people, don’t get your hopes up, because you are not going to easily save time very soon. And that’s just a pain in the a**.”

  • John

    Hi guys.
    I’m trying to figure out why my GPU renders slower than my CPU.
    I have an Intel Core i7 processor with 4 GB of RAM (x64)
    and a NVidia GeForce GT 540M (1 GB).
    Is my GPU not enough? Is anyone else in the same situation as me but getting different results? (i.e. GPU performing better)
    Thanks

    • fergus

      If your GPU renders slower than your CPU, that basically means you should either use your CPU instead of the GPU, or install a more powerful GPU (I strongly recommend the latter).

  • shank21101985

    Thanks for the Guide !!

  • licuador

    Two things I found to speed up renders in Cycles.

    1. Turning off Progressive refine helps tremendously.

    2. The tile size doesn’t have to be square all the time. I divide the image into 8 tiles (the magic number for me is x:480 y:270, left to right, for HD renders).

    • nastys

      My render time using 480×270 is the same as using 512×512.

  • Joachim Otahal

    Thanks for the guide! It’s sad that you did not test even bigger tiles for CUDA. On this card here with 6 GB of VRAM, a tile size of 512×512 (31.3) or 1024×1024 (32.7) is faster than 256×256 (34.7).

  • Phillip Larue

    Hello guys. I have a problem: in the drop-down menu in User Preferences under Compute Device, I only see CPU but not GPU.
    I have an Intel HD Family graphics card.

    • Donald Bronson

      Intel HD means “integrated graphics.” In other words, you don’t actually have a GPU; the CPU takes care of the graphics internally. Thus, because there is no physical GPU, only your CPU shows up. You will need to buy a dedicated graphics card if you want to use GPU rendering.

      • Emil Tullstedt

        And remember to pick up a CUDA-compatible graphics card (i.e. Nvidia) if you don’t like to wait for OpenCL support to become… usable ;)

      • omikun

        Integrated graphics just means the graphics cores are on the same die as the CPU cores. Integrated graphics != software graphics. Only his CPU shows up because he does not have a CUDA device.

    • Anonymous

      Only the Nvidia GTX series of graphics cards is supported :(

  • xD

    Hey, it’s not a big tip, but when you are ready to hit render and walk away for a bit, use the mouse to scroll waaay out from the center, so it’s not zoomed into view; this sometimes seems to give it a little nudge.