
24-bit depth buffer is a sub-optimal format?

Started by August 19, 2017 11:58 AM
29 comments, last by matt77hias 7 years, 3 months ago
25 minutes ago, Hodgman said:

If Z-fighting is an issue, then yeah I'd definitely recommend doing this. 

  1. Make sure you're using a 32_FLOAT depth format.
  2. Swap the near/far params of your projection matrix creation code (e.g. instead of Projection(fov, aspect, near, far), use Projection(fov, aspect, far, near)).
  3. Swap your depth comparison function (e.g. replace LESS_EQUAL with GREATER_EQUAL).
  4. If you have any shaders that read the depth buffer (e.g. deferred lighting reconstructing positions from depth), then fix the bugs that this has introduced into that code.

The link I posted earlier explains why this is magic.

But quickly -- z-buffers store z/w, which is a hyperbolic curve that focuses most precision on values that are close to the near plane (something like 50% of your depth buffer values cover the range of (near, 2*near]!!), and floating point formats do a similar thing -- they're a logarithmic format that focuses most precision on values that are close to zero. If you simply use a floating point format to store z/w, you make the problem twice as bad -- you've got two different encodings that both focus on making sure that values next to the near plane are perfect, and do a bad job of values next to the far plane... So if you invert one of the encodings (by mapping the far plane to zero), then you've now got two encodings that are fighting against each other -- the z/w hyperbolic curve is fighting to focus precision towards the near plane, and the floating point logarithmic curve is fighting to focus precision towards 0.0f (which we've mapped to the far plane). The result is that you end up with an almost linear distribution of values between near and far, and great precision at every distance.
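
For concreteness, those four steps map to something like the following minimal D3D11 sketch; the function name, output parameters, and the 0.1f/1000.0f near/far distances are illustrative placeholders, not code from this thread:

  #include <d3d11.h>
  #include <DirectXMath.h>

  // Hypothetical setup helper for a reversed-Z pipeline.
  void SetupReversedZ(ID3D11Device* device, UINT width, UINT height,
                      float fovY, float aspect,
                      ID3D11Texture2D** outDepthTex,
                      ID3D11DepthStencilState** outDepthState,
                      DirectX::XMMATRIX* outProj)
  {
      // 1. A 32-bit float depth format instead of DXGI_FORMAT_D24_UNORM_S8_UINT.
      D3D11_TEXTURE2D_DESC depthDesc = {};
      depthDesc.Width = width;
      depthDesc.Height = height;
      depthDesc.MipLevels = 1;
      depthDesc.ArraySize = 1;
      depthDesc.Format = DXGI_FORMAT_D32_FLOAT;
      depthDesc.SampleDesc.Count = 1;
      depthDesc.Usage = D3D11_USAGE_DEFAULT;
      depthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
      device->CreateTexture2D(&depthDesc, nullptr, outDepthTex);

      // 2. Swap the near/far parameters (placeholders: near = 0.1, far = 1000)
      //    so the near plane maps to depth 1 and the far plane to depth 0.
      *outProj = DirectX::XMMatrixPerspectiveFovLH(fovY, aspect, 1000.0f, 0.1f);

      // 3. Flip the depth comparison so larger (nearer) values pass the test.
      D3D11_DEPTH_STENCIL_DESC dsDesc = {};
      dsDesc.DepthEnable = TRUE;
      dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
      dsDesc.DepthFunc = D3D11_COMPARISON_GREATER_EQUAL; // was LESS_EQUAL
      device->CreateDepthStencilState(&dsDesc, outDepthState);

      // 4. Any shader that reads this depth buffer must account for the flipped mapping.
  }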

I've definitely been using 32-bit for the Z buffer; I have no use for stencil in what I've been doing.

I swapped the near/far values around and updated the less/greater comparison, but I'm getting no rendering at the moment. I'm trying a couple of things; I'll turn off the Z test altogether just to see what happens. Will let you know.


You should also clear the depth buffer to 0.0f instead of 1.0f.
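
In D3D11 terms that's a one-liner at the start of the frame; a sketch, assuming `context` and `dsv` are placeholder names for your immediate context and depth-stencil view:

  // With reversed Z the far plane is now 0.0f, so clear depth to 0 instead of 1.
  context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 0.0f, 0);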

6 hours ago, Hodgman said:

If Z-fighting is an issue, then yeah I'd definitely recommend doing this. 

  1. Make sure you're using a 32_FLOAT depth format.
  2. Swap the near/far params of your projection matrix creation code (e.g. instead of Projection(fov, aspect, near, far), use Projection(fov, aspect, far, near)).
  3. Swap your depth comparison function (e.g. replace LESS_EQUAL with GREATER_EQUAL).
  4. If you have any shaders that read the depth buffer (e.g. deferred lighting reconstructing positions from depth), then fix the bugs that this has introduced into that code.


That's very cool, I wasn't even aware of this technique. I'll have to try it out sometime.

Do you know what the performance difference is (on modern hardware) between using a 32-bit float depth buffer format and the traditional D24S8 one?

16 hours ago, turanszkij said:

You should also clear the depth buffer to 0.0f instead of 1.0f.

That's probably the issue!

 

On a side issue, when looking into my stencil state, I realised I haven't been setting it at the start of the frame, so it's relying on the driver's default. That was bad; if nothing else, I've found a potential bug through this.


Well, I did a test run, and yes, the Z depth clear was 1.0f when it should have been 0.0f (as that's the far end now!).

But something weird was happening: my models were rendering back to front (pixel-wise), which contradicted my terrain, which was rendering the right way. Even the shadows worked after the change. I couldn't track down the source of the issue, as the models use the same projection matrix. The Z test was just backwards on the models only...


On 20/08/2017 at 4:44 AM, ErnieDingo said:

On a side issue, when looking into my stencil state, I realised I haven't been setting it at the start of the frame, so it's relying on the driver's default. That was bad; if nothing else, I've found a potential bug through this.

For what it's worth, the runtime enforces the default, not the driver, so it's well-defined across all hardware.


I am also trying to do the reversed depth buffer approach but there is some confusion on my part. 

In the object rendering shaders, I have a float4 pos2D : TEXCOORD which is filled out the same as the float4 pos : SV_POSITION in the vertex shader.

Now in the pixel shader, the pos attribute arrives with its Z component divided by W because it is a system value. The pos2D attribute, however, doesn't get that division, so its Z component gives me linear depth from 0 to the far plane. This works fine with the regular depth buffer. With the reversed depth buffer approach I am getting very strange values: 0.0f at the far plane and increasing values towards the near plane, but the closest values are only around 0.100f. I am somewhat confused; how should I get the linear depth here?

I guess this is applicable to OpenGL as well, right?

On 2017. 08. 22. at 10:44 PM, turanszkij said:

I am also trying to do the reversed depth buffer approach but there is some confusion on my part. 


Ok, I have finished implementing the reversed z-buffer. I am now using the pos2D.w attribute for the object shader linear depth.
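
For anyone reconstructing linear depth from the reversed-Z buffer itself rather than from an extra interpolant, here is a small sketch; it assumes the swapped-parameter perspective projection discussed above, and the helper name is made up for illustration:

  // Convert a reversed-Z depth sample d (1.0 at the near plane, 0.0 at the far
  // plane) back to linear view-space depth, by inverting the projection mapping.
  float LinearizeReversedDepth(float d, float nearZ, float farZ)
  {
      return (nearZ * farZ) / (nearZ + d * (farZ - nearZ));
  }

  // Sanity check: d = 1.0f returns nearZ, d = 0.0f returns farZ.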

Also, this technique is a HUGE improvement. Previously, when I had set the camera far plane to 5000 world units with a big scene, I had Z-fighting everywhere in the distance. Now with the reversed Z (32-bit floating point), I can't see any Z-fighting anywhere. I am definitely leaving the 24-bit depth buffer for this!

On 8/23/2017 at 1:59 PM, tuket said:

I guess this is applicable to OpenGL as well, right?

Yes, but GL defaults to clip-space depth values being in the range [-1, 1], which defeats the trick (the reversed Z buffer only helps if the range is [0, 1]).

You can make it work by overriding that default, calling glClipControl with GL_ZERO_TO_ONE, but you need OpenGL 4.5 or the GL_ARB_clip_control extension. If neither OpenGL 4.5 nor the extension is present, you can't call glClipControl and the reversed Z buffer trick won't improve precision.
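
A sketch of that check, assuming glad as the function loader (the GLAD_* macros depend on how the loader was generated, so treat them as placeholders):

  // glClipControl needs OpenGL 4.5 or GL_ARB_clip_control.
  if (GLAD_GL_VERSION_4_5 || GLAD_GL_ARB_clip_control)
  {
      // Map clip-space depth to [0, 1] (D3D-style) so reversed Z keeps its precision.
      glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
  }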

