SVOGI Implementation Details


Josh Klint said:
but the results really do compete with hardware raytracing, and performance is so much faster:

pssst… keep silent about that. People are meant to work half a year so they can afford 600W GPUs to get the same thing, but with a shiny green badge on the lower right! :P


I am pretty happy with how this is turning out. This is a very dark room with a bright scene outside, which is very hard to get right:


I'm having problems with light leaks in one situation. The samples never get fully blocked, and light bounces off the doorway into another room:


JoeJ said:
This truly, truly sucks. Prepare for never ending source of voxel frustration… ; )

:D

Besides tracing more and narrower cones, I only see one way to fix this: switch to a diffusion approach instead of a gathering approach.
The light would then diffuse from the lit window to the wall, and it would be properly blocked.

That's one of the two voxel ideas I had tried back then, before Crassin's paper came out. (The other was Michael Bunnell's mind-blowing idea to replace visibility with anti-radiosity, but that has its own leaking issues and does not work for interiors.)
There are two problems:
1. The speed of light is slow. Likely you do one diffusion step per frame, so light travels only one voxel per frame. If we're talking only about indirect light, that's maybe acceptable. You can see it in Dostal's work, which seems to be a good implementation of the idea:

2. Which angular basis to choose? Ambient cube would be the obvious choice. But I wanted to avoid angular quantization, so I tried SH2 (Crytek used this for LPV too). But SH2 cannot represent light flow from opposing directions well. So I tried SH3, which is better, but much more expensive and still not good enough to compete with the detail we get from Bunnell's proposal. Here's a reminder of Bunnell's work, which I believe ran on the Xbox 360. Notice the very high quality:

It's a shame this was never used in a game. Not sure what went wrong. All that's left is this video, a patent, and some GDC talk.

Back on topic, maybe something like ambient dice or spherical Gaussians would crank the diffusion idea up. But that means high memory requirements multiplied by already demanding volume representations, which do not cope well with dynamic scenes due to quantization issues. Thus I personally gave up on voxels and went back to surfels.

Probably all you can do is accept such artifacts. Or you take more samples where space becomes dense, which would lead us to SDF ideas. Notice we can approximate an SDF with an exponential sum of density mips.
That's an idea, maybe…

The first video above reminds me of some high-speed video I saw that captured the way photons move. Light actually travels a lot like water:

The cone step trace approach works great for specular reflection, using a single narrow ray. Even if the ray is widened for a blurry reflection, the reflection is so indistinct it hides artifacts well.
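For readers following along, here is a minimal sketch of what such a single cone step trace usually looks like: march front-to-back through pre-filtered voxel mips, picking the mip whose footprint matches the growing cone radius. The function and parameter names (`ConeTrace`, `sampleVoxel`, etc.) are hypothetical, not Ultra Engine's actual API; GLM is only used for the vector math.

```cpp
// Minimal sketch of a single cone step trace through a pre-filtered voxel volume.
// sampleVoxel(worldPos, mip) stands in for however the engine fetches voxel data
// (rgb = radiance, a = occlusion). All names here are hypothetical.
#include <glm/glm.hpp>
#include <cmath>
#include <functional>

using VoxelSampler = std::function<glm::vec4(const glm::vec3& worldPos, float mip)>;

glm::vec4 ConeTrace(const glm::vec3& origin, const glm::vec3& dir, float coneAngle,
                    float voxelSize, float maxDistance, const VoxelSampler& sampleVoxel)
{
    glm::vec3 color(0.0f);
    float alpha = 0.0f;

    // Start one voxel off the surface so the cone does not sample its own voxel.
    float dist = voxelSize;

    while (dist < maxDistance && alpha < 1.0f)
    {
        // Cone radius grows linearly with distance; pick the mip whose voxel
        // footprint matches that radius.
        float radius = dist * std::tan(coneAngle * 0.5f);
        float mip    = std::log2(glm::max(radius / voxelSize, 1.0f));

        glm::vec4 sample = sampleVoxel(origin + dir * dist, mip);

        // Front-to-back compositing: later samples are attenuated by whatever
        // has already been occluded.
        color += (1.0f - alpha) * sample.a * glm::vec3(sample);
        alpha += (1.0f - alpha) * sample.a;

        // Step proportional to the current cone radius, but at least one voxel.
        dist += glm::max(radius, voxelSize);
    }

    return glm::vec4(color, alpha); // alpha = how much of the cone was blocked
}
```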

Currently, I am using a single wide cone for diffuse. I like this because it's really fast and uses very few samples. I tried breaking the cone up into four sub-samples and tracking their alpha values independently, but it did not reduce the problem much. Possibly my code was not working correctly, but my gut says that approach would not be satisfactory.

I will think about doing diffuse with light diffusion instead of gathering samples. The cone step tracing code is not a waste, because that will still be used with specular reflection (which diffusion would not be able to handle).


Josh Klint said:
The first video above reminds me of some high-speed video I saw that captured the way photons move. Light actually travels a lot like water:

Ha, I thought the exact same thing when I saw such a video. : ) I still wonder how they record this.
But light diffusion is easier than water, as it does not advect its own velocity field.

Josh Klint said:
Currently, I am using a single wide cone for diffuse. I like this because it's really fast and uses very few samples. I tried breaking the cone up into four sub-samples and tracking their alpha values independently, but it did not reduce the problem much.

Getting away with just one cone is really optimistic. I think people used something like 5-8 in practice, which would help if your walls are not very thin, lifting the leakage problem to larger distances.
To do a reasonable approximation of the cosine-weighted half space, you need more cones anyway, so you have two reasons to accept more cones.
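A sketch of what such a small fixed cone set might look like, assuming the hypothetical `ConeTrace`/`VoxelSampler` from the earlier sketch; the five directions and cosine weights here are illustrative, not a tuned basis.

```cpp
// Sketch of a 5-cone cosine-weighted hemisphere gather: one cone along the
// normal plus four tilted by 45 degrees. Directions and weights are illustrative.
// Assumes ConeTrace() and VoxelSampler from the earlier sketch.
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>
#include <cmath>

glm::vec3 DiffuseConeGather(const glm::vec3& pos, const glm::vec3& normal,
                            float voxelSize, float maxDistance,
                            const VoxelSampler& sampleVoxel)
{
    // Build a tangent frame around the surface normal.
    const glm::vec3 tangent = glm::normalize(glm::abs(normal.y) < 0.99f
        ? glm::cross(normal, glm::vec3(0.0f, 1.0f, 0.0f))
        : glm::cross(normal, glm::vec3(1.0f, 0.0f, 0.0f)));
    const glm::vec3 bitangent = glm::cross(normal, tangent);

    const float coneAngle = glm::radians(60.0f); // wide diffuse cones
    const float tilt      = glm::radians(45.0f);

    glm::vec3 result(0.0f);
    float weightSum = 0.0f;

    for (int i = 0; i < 5; ++i)
    {
        glm::vec3 dir = normal;
        if (i > 0)
        {
            // Four cones spaced evenly around the normal.
            float phi = glm::two_pi<float>() * float(i - 1) / 4.0f;
            dir = glm::normalize(normal * std::cos(tilt) +
                (tangent * std::cos(phi) + bitangent * std::sin(phi)) * std::sin(tilt));
        }
        float weight = glm::max(glm::dot(normal, dir), 0.0f); // cosine weighting
        result    += weight * glm::vec3(ConeTrace(pos, dir, coneAngle, voxelSize, maxDistance, sampleVoxel));
        weightSum += weight;
    }
    return result / weightSum;
}
```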

Josh Klint said:
I will think about doing diffuse with light diffusion instead of gathering samples.

That would be very interesting! The idea of having each cell interact just with its neighbors sounds more efficient than tracing, in theory.

Your fluid example gives me an interesting idea. The key problem is: how can we represent light moving in all directions? Fluid always moves in only one direction at any given point.

But besides the conventional vector field advection, there is also a flux-based advection method, commonly used in games to do things like 2D heightfield waves in water puddles ('shallow water equations').
Flux can go both left and right, not just left or right.
I did this in 3D too, using 6 global axis directions, which is the same as an ambient cube.
It's good at conserving energy, and it does not need something like a divergence-free vector field to ensure this. So maybe that's useful for propagating light as well. But I'm not sure - maybe it's actually a stupid idea. I learned about this from the water simulation used here: https://github.com/LanLou123/Webgl-Erosion
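I'm not sure this maps one-to-one onto the shallow-water flux scheme, but a single per-frame exchange step over a dense grid could look roughly like the sketch below, with the `Cell` layout and `occlusion` field invented for illustration. A real version would be a double-buffered compute shader and would also bleed some flux into neighbouring directions so light does not only travel along the axes.

```cpp
// Sketch of one flux-style propagation step over a dense grid that stores
// radiance along the six axis directions (ambient-cube style). src and dst must
// both contain dim.x * dim.y * dim.z cells.
#include <glm/glm.hpp>
#include <vector>

struct Cell
{
    glm::vec3 flux[6];       // radiance travelling along +X,-X,+Y,-Y,+Z,-Z
    float occlusion = 0.0f;  // 0 = empty, 1 = solid (from the voxelization pass)
};

void PropagateStep(const std::vector<Cell>& src, std::vector<Cell>& dst, glm::ivec3 dim)
{
    const glm::ivec3 dirs[6] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
    auto index = [&](glm::ivec3 p) { return (p.z * dim.y + p.y) * dim.x + p.x; };

    for (int z = 0; z < dim.z; ++z)
    for (int y = 0; y < dim.y; ++y)
    for (int x = 0; x < dim.x; ++x)
    {
        glm::ivec3 p(x, y, z);
        Cell out;
        out.occlusion = src[index(p)].occlusion;

        for (int d = 0; d < 6; ++d)
        {
            out.flux[d] = glm::vec3(0.0f); // default: no incoming light

            // Pull the flux the neighbour behind us is pushing in direction d.
            glm::ivec3 n = p - dirs[d];
            if (n.x < 0 || n.y < 0 || n.z < 0 || n.x >= dim.x || n.y >= dim.y || n.z >= dim.z)
                continue;

            // Solid cells absorb the light instead of passing it on,
            // so light travels exactly one voxel per step.
            out.flux[d] = src[index(n)].flux[d] * (1.0f - out.occlusion);
        }
        dst[index(p)] = out;
    }
}
```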

How would you start the light scattering routine? I calculate direct lighting in the voxelization step. I suppose then you would want a cone of light to bounce off the lit voxel, but how do you keep track of which cells have light moving through them, and update it gradually each frame?

I don't think light injection would help with the PBR skybox diffuse sampling. I am currently multiplying the diffuse skybox lookup by one minus the alpha of the cone step result, as this shows how much of the sky is showing through the scene.
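In code form, that blending is essentially the following (names illustrative; assumes the hypothetical `ConeTrace`/`VoxelSampler` sketch from above):

```cpp
// Sketch of the sky term described above: the cone's accumulated alpha says how
// much of the sky is blocked, so the prefiltered skybox diffuse lookup is scaled
// by what remains and added to the traced bounce light.
glm::vec3 DiffuseWithSky(const glm::vec3& pos, const glm::vec3& normal,
                         const glm::vec3& skyboxDiffuse, float coneAngle,
                         float voxelSize, float maxDistance, const VoxelSampler& sampleVoxel)
{
    glm::vec4 cone = ConeTrace(pos, normal, coneAngle, voxelSize, maxDistance, sampleVoxel);
    return glm::vec3(cone) + skyboxDiffuse * (1.0f - cone.a);
}
```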

Switching to five diffuse rays instead of one doesn't totally eliminate the problem, but it does provide much softer indirect lighting. This is an extreme case of a pitch-black room next to a brightly lit area, and it looks not too bad:


Josh Klint said:
How would you start the light scattering routine? I calculate direct lighting in the voxelization step. I suppose then you would want a cone of light to bounce off the lit voxel, but how do you keep track of which cells have light moving through them, and update it gradually each frame?

Your picture would be correct only for specular materials, but for GI it's more important to get rough materials right.
Realtime GI usually means ‘treat every surface as simple perfect diffuse, but if possible, generate directional probes so we can shade our PBR materials properly’.
This allows for optimization and really is good enough. It would fail only in extreme cases, like calculating indirect lighting in a labyrinth of mirrors.
The scattering of light reflected from any surface is then just the opposite of integrating cosine-weighted samples when shading it. It should look like this:

So a constant value of bounced intensity forms a perfect sphere touching the emitting surface. (I have forgotten the formula relating distance to intensity. Probably it's 1 / squared distance.)
The image shows the highest intensity near the surface in red, decreasing through orange and yellow. It becomes zero only at infinity.
If you were to move along the boundary of the yellow circle, always looking at the emitting surface, you would see the same constant intensity of reflected light times the surface color.
Which is also the reason why diffuse light ‘does not depend on the eye vector’. Notice that, for the same reason, the direction of the incoming light does not matter either. Only the surface normal matters, because of our ‘everything is diffuse’ assumption.
I don't think we could practically handle sharp reflections with diffusion. We would need high angular accuracy, like high-res cube maps. Making everything diffuse surely is required to avoid such costs.

So, what you want is to make sure your diffused energy creates such spherical patterns. This differs from the usual diffusion goal of creating circular patterns with the emitter at the center of the circles.
The latter ignores direction, but you need the normal direction to offset the spheres so their surface touches the emitter.
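To make the ‘sphere touching the surface’ statement concrete: the polar plot of the Lambertian cosine lobe really is a sphere tangent to the emitting surface, and the half-remembered distance falloff is indeed the inverse square law.

```latex
% Emitter at the origin, surface normal along +z. The Lambertian lobe plotted in
% polar form, r = d cos(theta), is a sphere of diameter d tangent to the surface:
\[
  r = d\cos\theta
  \;\Longrightarrow\;
  r^2 = d\,r\cos\theta
  \;\Longrightarrow\;
  x^2 + y^2 + z^2 = d\,z
  \;\Longrightarrow\;
  x^2 + y^2 + \left(z - \tfrac{d}{2}\right)^2 = \left(\tfrac{d}{2}\right)^2 .
\]
% The received intensity itself falls off with the usual inverse square law:
\[
  E(r,\theta) \;\propto\; \frac{\cos\theta}{r^2}.
\]
```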

I cannot remember how I did this. I looked up Crytek's LPV papers, but did not understand them well enough, so I had to figure it out myself. The results were not perfect. There were issues like snapping to the major axes, energy loss and such things.
EDIT: The major problem is that you know nothing about the emitter's position, normal or distance. So my image and explanation only help as a reference to verify your diffusion algorithm.

Josh Klint said:
I don't think light injection would help with the PBR skybox diffuse sampling. I am currently multiplying the diffuse skybox lookup by one minus the alpha of the cone step result, as this shows how much of the sky is showing through the scene.

Agreed. I want to use the same alpha-based method to handle a global environment map for my surfel probes, but I have not done this yet.

Josh Klint said:
Switching to five diffuse rays instead of one doesn't totally eliminate the problem, but it does provide much softer indirect lighting. This is an extreme case of a pitch-black room next to a brightly lit area, and it looks not too bad:

Yeah, that's not bad.
It would be a problem only if someone switched the sun on or off. You would wonder what was going on if you were in the dark room.

JoeJ said:
pssst… keep silent about that. People are meant to work half a year so they can afford 600W GPUs to get the same thing, but with a shiny green badge on the lower right! :P

What green badge and RTX dependency? Laughs in custom real time path tracer!

Sorry, I had to!

Propagation Techniques

As good as they sound, their propagation speed is a major problem. They tend to work reasonably well for light sources that are either static or moving very slowly, and you should not turn them on or off. They may also be fine for subtle lights. They are therefore the wrong choice for interior lighting, and in general for anywhere with dynamic lights. At the point where a propagation technique is applicable, progressive techniques (or even precomputation) might be a better choice. Why? They have the same problem with highly dynamic scenes, but in general they yield much better lighting quality (and yes, possibly even without voxelization or light bleeding problems - personal opinion, but even dynamic light mapping beat my attempt at an LPV-like approach in terms of both quality and performance).

Light Bleeding

This is the nightmare of voxel-based techniques. There are approaches to mitigate it (e.g. directional data in voxels, or denser voxel data - the latter always works, but at the cost of much higher memory usage).
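For the directional-data option, a common form is the anisotropic voxels from Crassin et al.: store occlusion per major axis and weight the three faces a ray looks at by the squared direction components. A rough sketch, with the storage layout invented for illustration:

```cpp
// Sketch of the "directional data in voxels" idea (anisotropic voxels): a thin
// wall blocks rays crossing it while rays grazing along it pick up less occlusion.
#include <glm/glm.hpp>

struct AnisoVoxel
{
    float occlusion[6]; // occlusion seen when looking along -X,+X,-Y,+Y,-Z,+Z
};

float DirectionalOcclusion(const AnisoVoxel& v, const glm::vec3& dir)
{
    glm::vec3 w = dir * dir; // squared components of a unit vector sum to 1
    float ox = dir.x < 0.0f ? v.occlusion[0] : v.occlusion[1];
    float oy = dir.y < 0.0f ? v.occlusion[2] : v.occlusion[3];
    float oz = dir.z < 0.0f ? v.occlusion[4] : v.occlusion[5];
    return w.x * ox + w.y * oy + w.z * oz;
}
```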

Level-of-Detail differences

I hid them somewhat successfully with multiple cascades, or at least to a level I thought was visually acceptable. Compared to my real-time path tracer (which yields superior quality, but at the cost of noise), I'm still fairly satisfied with VXGI - it offers SMOOTH global illumination. In my opinion the noise in path tracing tends to be way too distracting for the player.
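As a rough illustration of the cascade idea (the sizing and blend window here are made up, not taken from the setup described above), cascade selection can be as simple as:

```cpp
// Sketch of picking a voxel cascade by distance from the camera and fading to the
// next coarser cascade near the boundary so the level-of-detail switch is less
// visible. Each cascade covers twice the radius of the previous one.
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>

struct CascadePick
{
    int   level; // cascade index, 0 = finest
    float blend; // 0 = fully this level, 1 = fully the next coarser level
};

CascadePick PickCascade(const glm::vec3& worldPos, const glm::vec3& cameraPos,
                        float cascade0Radius, int cascadeCount)
{
    float dist  = glm::length(worldPos - cameraPos);
    float t     = std::log2(std::max(dist / cascade0Radius, 1.0f));
    int   level = std::min(int(t), cascadeCount - 1);
    // Fade in the last 20% of the cascade's range, except at the coarsest level.
    float blend = (level == cascadeCount - 1)
        ? 0.0f
        : glm::clamp((t - float(level) - 0.8f) / 0.2f, 0.0f, 1.0f);
    return { level, blend };
}
```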

Side note: one of the proposals I made and tested in my current game project (which will probably never get finished, as it is my playground) was to completely separate interior cells from exterior cells, much like you can see in The Elder Scrolls series. Exterior cells would then use either cascaded VXGI or a completely different approach, while interiors would use grids hand-placed by artists (VXGI tends to beat other solutions in terms of quality, and mainly smoothness, for interiors). This has the huge advantage of seeing results in the editor exactly as they will appear in game - so anyone creating or editing the current area can specify whether a grid needs to be sparser or denser, etc. It also solves the major problem of light bleeding between exterior and interior!

There are a few major disadvantages though. First of all, the definition of exteriors and of light coming from the exterior into the interior (I would need a properly clever way to do this). Second, and this is much harder to cope with and unrelated to GI itself, is introducing a loading screen between interior and exterior (and all the impacts of that - on gameplay, AI, time of day, etc.).


