Rendering Order, Blending and Performance.

2 comments, last by deadc0deh 4 years, 11 months ago

So I've heard that rendering front-to-back can increase performance due to the z buffer. I understand how this works, but what happens to blending? In order for blending to work, you need to render back-to-front. Moreover:

For blending, the technique I know and use is this: every frame I calculate the distance of each object from the camera, sort the objects in descending order from largest distance to smallest, and then render the objects farthest from the camera first.

This has two disadvantages: first, calculating the distances and sorting takes some time, and second, I can't render front-to-back. The first disadvantage can be mitigated quite well with a tree-based priority queue, but I still have to go through all the objects to build that tree (O(n log n) total time).
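
(For reference, here is a minimal sketch of the back-to-front ordering described above; the Object type, Vec3 and camPos are made-up names for illustration, not from any particular engine.)

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Object
{
    Vec3 position;
    // ... mesh, material, transparency flag, etc.
};

// Squared distance is enough for ordering and avoids the sqrt.
static float distSq(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort farthest-first so blended objects can be drawn back to front.
void sortBackToFront(std::vector<Object*>& objects, const Vec3& camPos)
{
    std::sort(objects.begin(), objects.end(),
              [&](const Object* a, const Object* b)
              { return distSq(a->position, camPos) > distSq(b->position, camPos); });
}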

Is there a way to combine this blending technique with front-to-back rendering? Or am I stuck unless my scene uses no transparent objects at all?




What is usually done is to separate opaque objects from transparent ones before rendering. Then you draw all opaque objects in front-to-back order. Afterward, you draw the set of transparent objects back to front. You can keep separate lists or split them while ordering the objects; it's just one additional condition.
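
A rough sketch of that separation, reusing the hypothetical Object type from the earlier snippet and assuming it carries an isTransparent flag:

#include <vector>

// Build the two draw lists once per frame; the split is just one extra condition.
void buildDrawLists(const std::vector<Object*>& scene,
                    std::vector<Object*>& opaque,
                    std::vector<Object*>& transparent)
{
    opaque.clear();
    transparent.clear();
    for (Object* obj : scene)
        (obj->isTransparent ? transparent : opaque).push_back(obj);
}

// Per frame:
//   buildDrawLists(scene, opaque, transparent);
//   sort 'opaque' front-to-back (nearest first) and draw it,
//   then sort 'transparent' back-to-front (farthest first) and draw it with blending.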

However, there is another issue: if you have transparent objects that intersect, you won't get a correct back-to-front order, regardless of what you do. So there is another technique called Order-Independent Transparency (https://en.wikipedia.org/wiki/Order-independent_transparency). This might also be an alternative for you.

Greetings

Quote

So I've heard that rendering front-to-back can increase performance due to the z buffer.

It might be worth noting that for the front-to-back opaque objects the order does not have to be perfect. One of the main principles of this optimization is that you populate the depth buffer so that occluded fragments (from shaders that don't modify depth) can be rejected before the fragment/pixel shader runs at all. In particular, there is usually an acceleration structure such as HTile to make this very efficient, rejecting multiple fragments per clock (e.g. see http://developer.amd.com/wordpress/media/2013/10/evergreen_cayman_programming_guide.pdf, chapter 7.3).

This allows you to simply use a dot product between the object's origin and the camera's view direction as an approximate depth. (You may have to subtract the camera's eye location first if you're dealing with large scenes, and obviously this works best if you don't have huge objects.)

Further, you can easily limit the set of objects you have to sort by marking large occluding objects and rendering only those front-to-back: once most of the depth buffer is populated, a lot of the smaller objects will never make it to the fragment shader, so it doesn't matter whether they go front-to-back or not. This can severely reduce the set of objects you have to sort.

It is also commonplace to perform a 'light-weight sort' by storing, say, struct { float distance; int index; } and later using just the index to walk through the actual objects. This is particularly beneficial if you're sorting not pointers but heavyweight objects, as it limits memory traffic during the sort. (A radix sort works quite well here.)
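
A sketch of what that might look like, reusing the Vec3 type from the earlier snippet; the SortKey layout and function names are just illustrative:

#include <algorithm>
#include <cstdint>
#include <vector>

struct SortKey
{
    float         distance; // approximate depth along the view direction
    std::uint32_t index;    // index into the real (heavy) object array
};

// Approximate depth: project the object's origin onto the camera's view
// direction. Subtracting the eye position keeps the values well-behaved
// in large scenes.
static float approxDepth(const Vec3& origin, const Vec3& eye, const Vec3& viewDir)
{
    const Vec3 rel = { origin.x - eye.x, origin.y - eye.y, origin.z - eye.z };
    return rel.x * viewDir.x + rel.y * viewDir.y + rel.z * viewDir.z;
}

// Front-to-back for opaque objects: smallest depth first. std::sort is shown
// for brevity; a radix sort over the float bits also works well on keys this small.
void sortFrontToBack(std::vector<SortKey>& keys)
{
    std::sort(keys.begin(), keys.end(),
              [](const SortKey& a, const SortKey& b) { return a.distance < b.distance; });
}

// Drawing then walks 'keys' and looks up objects[keys[i].index], so only the
// small keys are moved around during the sort, not the objects themselves.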

Note that you will still need to sort the transparent objects back-to-front. That said, not 'all' objects with alpha need this: in particular, a lot of particle systems render in additive mode (src = one, dst = one), which is render-order independent, so no sorting is required for them.
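
To see why the additive case is order independent: with src = one, dst = one the blend equation reduces to dst' = dst + src, and addition commutes, so any draw order gives the same result. A toy check in plain C++ (no graphics API involved):

#include <cstdio>

struct Color { float r, g, b; };

// Additive blend: dst' = dst * 1 + src * 1
static Color addBlend(Color dst, Color src)
{
    return { dst.r + src.r, dst.g + src.g, dst.b + src.b };
}

int main()
{
    const Color background = { 0.1f, 0.1f, 0.1f };
    const Color a = { 0.3f, 0.0f, 0.0f };
    const Color b = { 0.0f, 0.2f, 0.0f };

    const Color ab = addBlend(addBlend(background, a), b); // draw a, then b
    const Color ba = addBlend(addBlend(background, b), a); // draw b, then a

    // Both orders give the same color, which is why additive particles need no sorting.
    std::printf("a then b: %.2f %.2f %.2f\n", ab.r, ab.g, ab.b);
    std::printf("b then a: %.2f %.2f %.2f\n", ba.r, ba.g, ba.b);
    return 0;
}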

As user DerTroll points out:

Quote

However, there is another issue: if you have transparent objects that intersect, you won't get a correct back-to-front order, regardless of what you do. So there is another technique called Order-Independent Transparency (https://en.wikipedia.org/wiki/Order-independent_transparency).

But it might be worth noting that Order-Independent Transparency is usually quite memory hungry. If you are willing to opt for a multi-pass algorithm, it might be worth looking into a technique called "depth peeling". It depends a little on the scenario and the type of objects you are trying to render, but knowing yet another technique doesn't hurt.

Hope this helps

