25 minutes ago, Hodgman said:
If Z-fighting is an issue, then yeah I'd definitely recommend doing this.
- Make sure you're using a 32_FLOAT depth format.
- Swap the near/far params of your projection matrix creation code (e.g. instead of Projection(fov, aspect, near, far), use Projection(fov, aspect, far, near)).
- Swap your depth comparison function (e.g. replace LESS_EQUAL with GREATER_EQUAL).
- If you have any shaders that read the depth buffer (e.g. deferred lighting reconstructing positions from depth), then fix the bugs this change introduces in that code.
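The near/far swap in the second step works because the z/w mapping a perspective projection produces is determined entirely by the near/far values you pass in. A minimal sketch of the idea in plain Python (D3D-style [0, 1] depth range; the function name and the near/far values are illustrative, not from any particular engine or API):

```python
def projected_depth(view_z, near, far):
    """z/w produced by a D3D-style perspective projection: maps near->0, far->1.
    Passing the arguments swapped (far where near goes) reverses the mapping."""
    a = far / (far - near)           # the matrix's z-scale term
    b = -near * far / (far - near)   # the matrix's z-translate term
    return (view_z * a + b) / view_z  # perspective divide: w == view_z

near, far = 0.1, 1000.0
projected_depth(near, near, far)   # ~0.0 -- standard: near plane at depth 0
projected_depth(near, far, near)   # ~1.0 -- reversed: near plane at depth 1
projected_depth(far, far, near)    # ~0.0 -- reversed: far plane at depth 0
```

With the mapping reversed like this, a fragment closer to the camera now produces a *larger* depth value, which is why the comparison function in the next step must flip from LESS_EQUAL to GREATER_EQUAL (and the depth clear value from 1.0 to 0.0).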
The link I posted earlier explains why this is magic.
But quickly -- z-buffers store z/w, which is a hyperbolic curve that focuses most precision on values that are close to the near plane (something like 50% of your depth buffer values cover the range (near, 2*near]!!), and floating point formats do a similar thing -- they're a logarithmic format that focuses most precision on values that are close to zero. If you simply use a floating point format to store z/w, you make the problem twice as bad -- you've got two different encodings that both focus on making sure that values next to the near plane are perfect, and do a bad job of values next to the far plane... But if you invert one of the encodings (by mapping the far plane to zero), then you've now got two encodings that are fighting against each other -- the z/w hyperbolic curve is fighting to focus precision towards the near plane, and the floating point logarithmic curve is fighting to focus precision towards 0.0f (which we've mapped to the far plane). The result is that you end up with an almost linear distribution of values between near and far, and great precision at every distance.
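Those precision claims can be checked numerically: for positive IEEE-754 floats, consecutive representable values have consecutive bit patterns, so counting bit patterns in a depth interval tells you how many distinct buffer values a slice of the scene gets. A rough sketch (numpy for the bit-level view; the near/far values are arbitrary):

```python
import numpy as np

def f32_count(a, b):
    """Count distinct positive float32 values in [a, b): for positive IEEE-754
    floats, the integer bit patterns are monotonic, so we can just subtract."""
    return int(np.float32(b).view(np.int32) - np.float32(a).view(np.int32))

def z_over_w(z, near, far):
    """z/w for a D3D-style perspective projection (near -> 0, far -> 1)."""
    return far * (z - near) / (z * (far - near))

near, far = 0.1, 1000.0

# Hyperbolic z/w: the thin slice (near, 2*near] already uses half of [0, 1].
half = z_over_w(2 * near, near, far)        # comes out near 0.5

# The far half of the scene, view-space z in [far/2, far]:
#   standard mapping -> a sliver next to 1.0, where float32 spacing is coarse
#   reversed mapping -> a sliver next to 0.0, where float32 spacing is dense
std_lo = z_over_w(far / 2, near, far)
rev_hi = z_over_w(far / 2, far, near)       # reversed: near/far swapped
std_codes = f32_count(std_lo, 1.0)          # only a few thousand values
rev_codes = f32_count(0.0, rev_hi)          # hundreds of millions of values
```

The comparison at the end is the whole argument in two numbers: with the standard mapping the distant half of the scene shares a few thousand distinct float32 depth values (hence Z-fighting), while the reversed mapping gives that same half of the scene orders of magnitude more.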
Definitely been using a 32-bit z buffer -- no use for stencil in what I've been doing.
I swapped the depth params around and updated the less/greater comparison, but I'm getting no rendering atm. Trying a couple of things -- will turn off the z test altogether just to see what happens. Will let you know.