SVOGI Implementation Details


Color bleeding works. I found it was necessary to store the square root of the color value so the tint isn't lost at low light levels. Maybe storing it as HSL would be better.
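For illustration, here is a minimal sketch of that square-root storage trick, assuming 8-bit voxel channels and linear color in [0, 1] (EncodeChannel/DecodeChannel are hypothetical names, not engine code):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Storing sqrt(color) in an 8-bit channel spends more precision on dark
// values, so the tint survives at low light levels.
uint8_t EncodeChannel(float linear)   // linear assumed to be in [0, 1]
{
    return (uint8_t)(std::sqrt(linear) * 255.0f + 0.5f);
}

float DecodeChannel(uint8_t stored)
{
    float s = stored / 255.0f;
    return s * s; // squaring undoes the square root
}

int main()
{
    // A dim value: raw 8-bit storage of 0.01 would quantize to about 3/255,
    // while sqrt storage keeps roughly 26 usable steps below it.
    float dim = 0.01f;
    printf("stored = %u, decoded = %f\n",
           (unsigned)EncodeChannel(dim), DecodeChannel(EncodeChannel(dim)));
    return 0;
}
```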

This is just using a single 256x256x256 volume. The next step is to add more stages, like I was trying to do before; 128x128x128 is probably the ideal size for each stage. Pixels that fall outside the range of the volume data get lit with standard PBR lighting, which uses a cube map for diffuse/specular reflection. This is all combined with the direct lighting, which uses shadow maps.
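As a sketch of how that stage selection and fallback might be structured, assuming nested axis-aligned stages ordered from highest detail to lowest (SampleVoxelGI and SampleCubemapAmbient are hypothetical stand-ins for the engine's actual lookups):

```cpp
// A point inside a stage gets the voxel GI from that stage; a point outside
// every stage falls back to the standard PBR cubemap ambient.
struct Vec3 { float x, y, z; };
struct Stage { Vec3 boundsMin, boundsMax; };

Vec3 SampleVoxelGI(const Stage& stage, const Vec3& p); // engine lookup (assumed)
Vec3 SampleCubemapAmbient(const Vec3& p);              // engine lookup (assumed)

static bool Contains(const Stage& s, const Vec3& p)
{
    return p.x >= s.boundsMin.x && p.x <= s.boundsMax.x &&
           p.y >= s.boundsMin.y && p.y <= s.boundsMax.y &&
           p.z >= s.boundsMin.z && p.z <= s.boundsMax.z;
}

Vec3 AmbientLighting(const Vec3& p, const Stage stages[], int stageCount)
{
    for (int i = 0; i < stageCount; ++i)       // finest stage first
        if (Contains(stages[i], p))
            return SampleVoxelGI(stages[i], p);
    return SampleCubemapAmbient(p);            // standard PBR cubemap fallback
}
```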

In that video, ambient light is set to black to make it easier to see what is happening.

10x Faster Performance for VR: www.ultraengine.com


By setting the sky color to bright pink, we can see how much diffuse sky color is showing through the geometry. The floor under the awning is partially obscured and shows a nice partial diffuse color:

With the same settings, the interior shows no penetration of the sky color:

Switching to our orangish skybox shows it's producing the desired look:

I think as a general rule, any gather approach to GI is going to be conservative with the light propagation. You're trying to balance things to prevent light leaks, so you probably won't see light filling a room completely, just areas where it bounces off surfaces.

10x Faster Performance for VR: www.ultraengine.com

Don't get me wrong - but to me this looks incorrect. Your pink color seems to be a sort of occlusion term - ambient occlusion - and it seems to be applied wrongly. An ambient occlusion term looks like this:

This is what your image would look like with just direct light and AO (the AO power is overdone):

But instead you're also multiplying the direct light by the ambient occlusion term, like this (the AO power is overdone; notice the black circles around the directly lit vases):

The result with GI should properly look like this:

This uses GI, but no AO at all.

In other words, you should not apply ambient occlusion to the direct diffuse light. You also should not apply ambient occlusion arbitrarily to a result that already uses GI - make sure to derive all your factors (and where to apply them) from the rendering equation, otherwise your GI will not look correct.
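For reference, this is the rendering equation being pointed to, along with the usual convention for where an AO factor can be slotted in (the decomposition on the second line is a common approximation, not part of the equation itself):

```latex
% The rendering equation:
\[
L_o(x,\omega_o) \;=\; L_e(x,\omega_o) \;+\; \int_{\Omega}
  f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,
  (n \cdot \omega_i)\, \mathrm{d}\omega_i
\]
% Direct light carries its own visibility term (the shadow map), so the
% common AO convention only scales the ambient/indirect part:
\[
L_o \;\approx\;
  \underbrace{V_{\text{shadow}}\, L_{\text{direct}}}_{\text{no AO}}
  \;+\;
  \underbrace{AO(x)\, L_{\text{ambient}}}_{\text{AO applies here only}}
\]
```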

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Standard PBR is going to use two texture lookups in a cubemap for diffuse and specular lighting. It looks good but as soon as you add any convex geometry to the scene it falls apart, because you can see the sky reflection in places you should not be able to:

The voxel GI adds a layer of geometry that gets evaluated before any cubemap sample is added to the lighting. If a voxel ray passes all the way out of the voxel volume, a cubemap sample is taken and multiplied by one minus the ray's accumulated alpha. In the shot above, the pink indicates how much sky/cubemap is showing through the geometry.
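A minimal sketch of that mixing rule, using standard front-to-back alpha accumulation; SampleVoxel and SampleCubemap are hypothetical stand-ins for the engine's texture fetches:

```cpp
// March a cone through the volume, accumulating color and opacity front to
// back; whatever opacity is left when the ray exits the volume is the
// fraction of sky/cubemap allowed to show through.
struct Vec4 { float r, g, b, a; };

Vec4 SampleVoxel(float px, float py, float pz, float lod); // engine fetch (assumed)
Vec4 SampleCubemap(float dx, float dy, float dz);          // engine fetch (assumed)

Vec4 ConeTrace(float ox, float oy, float oz, float dx, float dy, float dz,
               float maxDistance, float stepSize)
{
    Vec4 accum = {0, 0, 0, 0};
    for (float t = stepSize; t < maxDistance && accum.a < 1.0f; t += stepSize)
    {
        Vec4 s = SampleVoxel(ox + dx * t, oy + dy * t, oz + dz * t, /*lod=*/t);
        float w = (1.0f - accum.a) * s.a;   // front-to-back compositing
        accum.r += s.r * w;
        accum.g += s.g * w;
        accum.b += s.b * w;
        accum.a += w;
    }
    // Ray left the volume: add the cubemap, scaled by one minus the
    // accumulated alpha (the unoccluded fraction).
    Vec4 sky = SampleCubemap(dx, dy, dz);
    float open = 1.0f - accum.a;
    accum.r += sky.r * open;
    accum.g += sky.g * open;
    accum.b += sky.b * open;
    return accum;
}
```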

If you don't mix a skybox with the voxel GI, then you will have problems: an airplane in the sky, for example, won't have any reflections. So a versatile renderer has to account for both.

Ambient occlusion is a separate issue. I'm still working this out.

10x Faster Performance for VR: www.ultraengine.com

Here's a good photo showing how the sky is blocked by geometry, creating a sort of gray diffuse "shadow". It's not exactly a shadow, just an area the diffuse sky color isn't reaching:

10x Faster Performance for VR: www.ultraengine.com

My point wasn't skylight but direct light (that generates shadows).

But yes - in terms of skylight it looks correct under the roof, as you posted. I'm mostly using Sponza (which is a kind of special case for GI). Let me see if I can recreate a scene similar to yours.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

If a directional light is visible at any pixel, that implies that the sky is also visible. So you're right that it does look a bit odd. I think this is probably due to the compressed range of brightness we are using in computer graphics. In the photo above, it's a cloudy day, but if a beam of sunlight cut through that dark area, it would be so bright you would no longer be able to see the distinction between the lit area under the canopy and the area around it.

If I add a rule that says “if the pixel is visible to a directional light, give it 100% sky color” then it looks like this:
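That rule is simple enough to state as a sketch (names hypothetical):

```cpp
// If the shadow map says the pixel can see the directional light, the sky
// must be visible too, so force the sky visibility term to 1.
float SkyVisibility(float voxelTracedSkyVisibility, bool litByDirectionalLight)
{
    return litByDirectionalLight ? 1.0f : voxelTracedSkyVisibility;
}
```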

10x Faster Performance for VR: www.ultraengine.com

I don't know what issue you guys are discussing currently, but modeling diffuse with just 3 cones is still optimistic, and I expect a big error from that. You mentioned random rotation to improve this(?), but if that only causes the 3 directions to rotate around the normal, you basically sample a 'circle', not the whole half-space. And you can't model cosine weighting at all.

I think the minimum would be 6 cones: one along the normal, with the other five again forming a circle. But then cosine weighting works: the normal cone has a weight of one, and the others have a lesser weight of dot(normal, tracingDirection).
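A sketch of that 6-cone setup, assuming an orthonormal tangent/bitangent basis is available and a ring tilt of the caller's choosing (all names illustrative):

```cpp
#include <cmath>

// One cone along the normal with weight 1, five cones tilted toward a ring
// around it, each weighted by dot(normal, direction) = cos(tilt).
struct Vec3 { float x, y, z; };

static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Fills outDirs[6]/outWeights[6] for a surface with the given normal.
// 'tilt' is the ring cones' angle away from the normal (e.g. ~60 degrees).
void BuildSixCones(Vec3 n, Vec3 tangent, Vec3 bitangent,
                   float tilt, Vec3 outDirs[6], float outWeights[6])
{
    outDirs[0] = n;                 // cone along the normal
    outWeights[0] = 1.0f;
    float s = std::sin(tilt), c = std::cos(tilt);
    for (int i = 0; i < 5; ++i)
    {
        float phi = 2.0f * 3.14159265f * i / 5.0f;  // evenly spaced ring
        Vec3 d = {
            n.x * c + (tangent.x * std::cos(phi) + bitangent.x * std::sin(phi)) * s,
            n.y * c + (tangent.y * std::cos(phi) + bitangent.y * std::sin(phi)) * s,
            n.z * c + (tangent.z * std::cos(phi) + bitangent.z * std::sin(phi)) * s };
        outDirs[i + 1] = Normalize(d);
        outWeights[i + 1] = c;      // cosine weight: dot(normal, direction)
    }
}
```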

You could also do Monte Carlo sampling like in path tracing, to break the circle shape and get better sampling overall with any fixed number of cones.
I would do this using precomputed samples on a hemisphere, where the samples are cosine weighted and also separated by a Poisson disk distribution, so you do not sample directions which are too close together. The sample disk could still be randomly rotated to hide the precomputed pattern.
Personally I've never done this in a progressive way, where you accumulate with results from previous frames even though the scene is changing and the camera is moving. You either do the accumulation on voxels or on screen-space pixels (which then needs reprojection and temporal accumulation).
But that's not hard to figure out, I guess, and then you should also be able to sample the sky properly without relying on simplifying assumptions.
Not sure about temporal issues under motion, though.
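A rough sketch of building such a precomputed sample set: cosine-weighted hemisphere directions (around +Z), thinned by dart-throwing as a crude stand-in for a proper Poisson disk construction (names and constants illustrative):

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

static float Rand01() { return (float)std::rand() / (float)RAND_MAX; }

// Standard cosine-weighted hemisphere mapping: sample a unit disk, project up.
static Vec3 CosineSample()
{
    float u = Rand01(), v = Rand01();
    float r = std::sqrt(u);                 // pdf proportional to cos(theta)
    float phi = 2.0f * 3.14159265f * v;
    return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u) };
}

// Reject candidates closer than 'minDot' (in angle) to an accepted sample,
// so no two directions are too similar. Rotate the set randomly at runtime
// to hide the precomputed pattern.
std::vector<Vec3> BuildSampleSet(int count, float minDot)
{
    std::vector<Vec3> samples;
    int attempts = 0;
    while ((int)samples.size() < count && attempts++ < 10000)
    {
        Vec3 s = CosineSample();
        bool tooClose = false;
        for (const Vec3& o : samples)
            if (s.x * o.x + s.y * o.y + s.z * o.z > minDot) { tooClose = true; break; }
        if (!tooClose) samples.push_back(s);
    }
    return samples;
}
```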

@joshklint What happens when you tone map the image?

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

@joej A random rotation combined with a denoise filter should be enough. This is a denoise filter I used for SSAO:
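The filter itself isn't reproduced here; purely as an illustration, a depth-aware box blur of the kind commonly paired with randomly rotated sampling might look like this (all names hypothetical, not the actual filter from the post):

```cpp
#include <cmath>

// Average AO over a square neighborhood, but reject samples whose depth
// differs too much from the center, so the blur does not bleed across edges.
// 'ao' and 'depth' stand in for clamped buffer reads.
float DenoiseAO(int x, int y, int radius, float depthTolerance,
                float (*ao)(int, int), float (*depth)(int, int))
{
    float centerDepth = depth(x, y);
    float sum = 0.0f, weight = 0.0f;
    for (int dy = -radius; dy <= radius; ++dy)
    {
        for (int dx = -radius; dx <= radius; ++dx)
        {
            float d = depth(x + dx, y + dy);
            if (std::fabs(d - centerDepth) > depthTolerance)
                continue;                   // depth discontinuity: skip
            sum += ao(x + dx, y + dy);
            weight += 1.0f;
        }
    }
    return (weight > 0.0f) ? sum / weight : ao(x, y);
}
```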

@Vilem Otte Original:

Modified:

10x Faster Performance for VR: www.ultraengine.com

