HBAO/GTAO casting too much occlusion from thin objects


Hi,

I am currently implementing the ambient occlusion technique described in the paper and presentation "Practical Realtime Strategies for Accurate Indirect Occlusion" (a.k.a. Ground-Truth Ambient Occlusion, GTAO).

The results so far are pretty good. However, I have found one edge case where I think the technique breaks down, and I'm wondering whether there is a way to solve the problem, or whether it simply cannot be solved with depth buffers alone.

The problem occurs with thin, elongated objects: they cast far too much occlusion, especially when viewed from a shallow angle.

In the images below I placed a thin pole at a ~20° angle, sticking into the floor plane. For reference there is also a big cube touching the ground plane, as well as a very thin wall (of similar thickness to the pole).

In the first two images you can see that the occlusion is much too strong where the pole comes close to the ground plane, when viewed from a shallow angle.

When viewed from above (the second pair of images) the occlusion looks far more reasonable (much weaker).

As far as I understand it, the problem is caused by the fact that the HBAO/GTAO algorithm searches for the horizon angle but has no knowledge of discontinuities in the depth buffer. In the schematic below, the floor fragments that are below (in Y) and behind (in Z) the fragments of the thin pole find their steepest horizon angle on the first fragment that lies on the pole. The visible part of the hemisphere [V1] is therefore bounded on one side by the horizon angle found on the ground plane [h1] and on the other by the angle towards the front-most fragment of the pole [h2].

That means the total visibility determined over the hemisphere is missing the entire visible back region [V2], spanning between the floor behind the pole [h4] and the back tangent onto the pole [h3].
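To make the failure mode concrete, here is a minimal 1D sketch of the horizon search over a single slice (plain C++ rather than shader code; the function names and scene values are made up for illustration). Because only the maximum elevation angle survives the march, a visible gap like [V2] behind a thin occluder can never be represented:

```cpp
// Minimal 1D illustration of the HBAO/GTAO-style horizon search for one
// slice direction: only the steepest elevation angle seen so far is kept,
// so any visible gap behind a thin occluder (the [V2] region) is lost
// by construction. Heights and step values are made up for illustration.
#include <cmath>
#include <cstdio>
#include <vector>

// 'heights' are sample heights along one screen-space slice, relative to
// the shaded point; sample i lies (i + 1) * stepSize away from it.
float horizonAngle(const std::vector<float>& heights, float stepSize)
{
    float maxTan = -INFINITY; // tangent of the steepest horizon so far
    for (size_t i = 0; i < heights.size(); ++i) {
        float dx = stepSize * float(i + 1);
        maxTan = std::fmax(maxTan, heights[i] / dx); // hard max: no gaps
    }
    return std::atan(maxTan);
}

int main()
{
    // A thin pole right next to the shaded point, open floor behind it:
    // the horizon stays pinned to the pole, and the visible floor behind
    // it contributes nothing to the slice's visibility.
    std::vector<float> slice = { 0.5f, 0.0f, 0.0f, 0.0f };
    std::printf("horizon = %.1f deg\n",
                horizonAngle(slice, 0.25f) * 180.0f / 3.14159265f);
}
```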

At least, that's what I think is causing the problem. Is my reasoning correct so far? And if so, are there ways to fix this issue within the GTAO/HBAO approach?

I have already tried out some ideas of my own, using a back-face Z-buffer and full depth peeling with multiple layers, but nothing so far has solved the problem without introducing artifacts in the regular AO cases.
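For reference, one way such a back-face Z-buffer could be used (a rough sketch of the general idea, not my actual code; 'falloff' is just an illustrative tuning constant):

```cpp
// Sketch of a thin-occluder test using a second depth buffer that stores
// back-face depths: estimate the occluder's thickness at a sample and
// down-weight samples from objects much thinner than 'falloff', so a
// sliver like the pole no longer pins the horizon at full strength.
// 'falloff' is an illustrative constant, not something from the paper.
#include <algorithm>

float thinOccluderWeight(float frontDepth, float backFaceDepth, float falloff)
{
    float thickness = std::max(backFaceDepth - frontDepth, 0.0f);
    return std::min(thickness / falloff, 1.0f); // weight in [0,1]
}
```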

Any hints would be much appreciated. Thanks!

Schematic

[image: scematic.png]

Side-View RGB

[image: 01_rgb_front.png]

Side-View AO (the thin pole casts way too much occlusion onto the ground plane)

[image: 02_ao_front.png]

Top-View RGB

[image: 03_rgb_top.png]

Top-View AO (from the top perspective the problem is not as noticeable)

[image: 04_ao_top.png]


 


Yeah, I've seen some research with layered depth (but I can't think of the paper names…)

A last resort would be to draw these problematic objects after computing AO...

Or writing some kind of color/stencil value per pixel that says whether that pixel belongs to a problematic object. Then when computing horizons, if any 'problematic-flagged' depth values are used, artificially lighten the resulting AO value. 
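As a sketch of that last idea (the blend factor and the AO convention assumed here are arbitrary):

```cpp
// Sketch of the flag-and-lighten idea: if any horizon sample was fetched
// from a pixel tagged (via stencil/colour) as belonging to a problematic
// object, fade the computed AO back toward fully unoccluded.
// Convention assumed here: ao is visibility in [0,1], 1 = unoccluded;
// 'lighten' is an arbitrary artistic constant.
float lightenFlaggedAO(float ao, bool usedFlaggedSample, float lighten = 0.5f)
{
    return usedFlaggedSample ? ao + (1.0f - ao) * lighten : ao;
}
```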

Importance layered depth is neat. FP16 depth is enough, because you don't need ranges beyond your SSAO radius anyway; then just do another depth layer, K-buffer or whatever, in tiles around the edges of each object. Results can be pretty nifty and much more temporally stable, but that doesn't mean it isn't still costly.

The original GTAO paper actually has a separate hack for this exact scenario, which is to somehow lessen the contribution from thin objects. It's been a while, so I don't remember the details, but it should be in the paper, yes? Or is it a different version of the same paper? OK yeah, glancing through it, they have a "thickness heuristic" hack for this exact scenario. Is this not working?
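From memory it's a small change to the horizon update, something along these lines (paraphrased, so double-check against the paper's listing; 'thickness' is the heuristic's blend constant):

```cpp
// Paraphrase (from memory, verify against the GTAO paper's listing) of the
// thickness heuristic: instead of a hard max(), let the running horizon
// decay toward lower samples as the march continues, so a thin occluder
// stops pinning the horizon forever.
float updateHorizonCos(float horizonCos, float sampleCos, float thickness)
{
    if (sampleCos >= horizonCos)
        return sampleCos; // regular horizon update
    // 'thickness' in [0,1]: 1 = infinitely thick occluders (classic HBAO
    // behaviour), 0 = the horizon is forgotten after a single step.
    return horizonCos * thickness + sampleCos * (1.0f - thickness);
}
```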

3 hours ago, Hodgman said:

Yeah, I've seen some research with layered depth (but I can't think of the paper names…)

Perhaps you're thinking of Morgan McGuire's research?

https://research.nvidia.com/publication/deep-g-buffers-stable-global-illumination-approximation
https://research.nvidia.com/publication/lighting-deep-g-buffers-single-pass-layered-depth-images-minimum-separation-applied
https://research.nvidia.com/publication/fast-global-illumination-approximations-deep-g-buffers

Here's another one: http://graphics.cs.aueb.gr/graphics/docs/papers/MultiviewAmbientOcclusion.pdf

17 hours ago, Frantic PonE said:

The original GTAO paper actually has a separate hack for this exact scenario, which is to somehow lessen the contribution from thin objects. It's been a while, so I don't remember the details, but it should be in the paper, yes? Or is it a different version of the same paper? OK yeah, glancing through it, they have a "thickness heuristic" hack for this exact scenario. Is this not working?

Yes, I tried to implement the described heuristic, but it didn't change the resulting occlusion much in the above case (maybe I did something wrong; I'll probably need to give it another try).

 

The Multiview AO paper looks really interesting. I had already wondered myself whether it would be possible to reuse depth information that is already present, in the form of shadow maps, to fill in some of the otherwise missing AO. I will definitely have to look through that paper.

Thanks everyone.

Forgot to mention one thing about why I think the heuristic would not work properly, even for a scenario only slightly modified from the one above. If I made the "thin pole" object a very thin wall-like object (just like the one already in the scene), with the same orientation and just barely touching the ground, it would still cast exactly the same occlusion even with the heuristic, as I understand it from the paper. That's because the heuristic relies on the assumption that an object's depth extent is similar to its screen-space width, which would not hold for that geometry: the wall is wide on screen along the march direction but only a few millimeters deep. So this is not really a viable solution for me.

What leaves me worried is the following excerpt from the presentation notes and the corresponding slides (43-45):

Quote

We can calculate the ambient occlusion integral as a double integral in polar coordinates.

The inner integral integrates the visibility for a slice of the hemisphere, as you can see on the left, and the outer integral sweeps this slice to cover the full hemisphere.

The simplest solution would be to just numerically solve both integrals.

But the solution we chose, horizon-based ambient occlusion, which was introduced by Louis Bavoil in 2008, made the key observation that the occlusion as pictured here can't happen when working with height fields.

Using height fields we would never be able to tell that the areas in green here are actually visible.

The key consequence of this is that we can just search for the two horizons h1 and h2, and that captures all the visibility information that can be extracted from a height map, for a given slice.

[image: hbao_limitations.png]
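For context, the formulation those notes describe is (as I read the paper) a cosine-weighted visibility integral evaluated per slice, with γ the angle of the projected normal inside the slice:

```latex
% Double integral over slice angle \phi and in-slice angle \theta,
% bounded per slice by the two horizon angles h_1(\phi), h_2(\phi):
\hat{A} = \frac{1}{\pi} \int_{0}^{\pi} \int_{h_1(\phi)}^{h_2(\phi)}
          \cos(\theta - \gamma)\,\lvert\sin\theta\rvert \; d\theta \, d\phi

% Inner integral in closed form, per slice:
\hat{a}(\phi) = \tfrac{1}{4}\bigl(-\cos(2h_1 - \gamma) + \cos\gamma + 2h_1\sin\gamma\bigr)
              + \tfrac{1}{4}\bigl(-\cos(2h_2 - \gamma) + \cos\gamma + 2h_2\sin\gamma\bigr)
```

The per-slice visibility is fully determined by h1 and h2, which is exactly why a disjoint visible region like [V2] cannot be expressed.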

I had a look at the presentations and PDF provided by Louis Bavoil in 2008, but was unable to find anything in their material that confirms or disproves the above statement. My many failed attempts at finding a solution leave me wondering whether I'm trying to solve the impossible with Z-buffers as the only source of information about the scene, i.e. to recover the green area(s) in the picture above.

Yeah, it's impossible unless you depth-slice a ton or whatever.

Point is, SSAO is meant for very small-radius ambient occlusion, where you can just assume the scene is a heightfield and probably be correct. 0.2 or 0.1 meters, depending on whether you're first or third person (assuming human-character standard units), is typically the max radius you'd want to use for SSAO, at least IMO. Some people use more, but the larger you go, the more obvious the errors become.

