SVOGI Implementation Details


Blending like a trilinear texture filter should just fix that? It worked well for me.

Edit: I still do such blending of two LODs for my current surfels approach too. Lots of complexity, and it practically halves the detail, but it's worth it.

Btw, how do you do voxelization? Unpacking it from a static SVO? Or generating it in real time?


JoeJ said:

Blending like a trilinear texture filter should just fix that? It worked well for me.

Edit: I still do such blending of two LODs for my current surfels approach too. Lots of complexity, and it practically halves the detail, but it's worth it.

Btw, how do you do voxelization? Unpacking it from a static SVO? Or generating it in real time?

https://www.ultraengine.com/community/blogs/entry/2751-gpu-voxelization/


The errors I showed earlier were because the indirect lighting calculation itself was simply wrong.

I now have it working correctly, but there are still some pretty big discrepancies between stages:

Even if some cross-fading is added, that is still going to stick out pretty badly.


I can see the GI looks much more correct. Maybe a big difference, but it's hard to say. Personally I really like the Cornell Box scene for verifying this.

But I can't see any cross-fading. It still switches hard between 2-meter blocks in the video. To hide the transition, you'd need to fade across a range of 4-8 meters, I guess.
I do the fading across the entire range of a level of detail. That's the only way to really hide it, but the loss of detail is big and hurts.

I remember it was pretty hard to get my cross-fade right.
Not sure how it works for voxels. We could assume you only need to average the 8 child voxels to get the presumed parent color, and then blend all children towards that average. But I'm not sure how to deal with tracing.
In the worst case, you'd need to process the overlap region at both detail levels and blend only the results. This increases the cost by about 1/7, I guess (the overlap is 1/8 of the lower LOD's volume, on top of the 7/8 you'd compute anyway), so not that bad. It could at least serve as a reference implementation.
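A minimal sketch of that child-averaging idea, written here in C++ purely for illustration; the Color struct, the brick layout, and the blend parameter are assumptions, not anything taken from an actual engine:

```cpp
#include <array>

struct Color { float r, g, b, a; };

// Average the 8 children of a voxel to get the presumed parent color.
Color AverageChildren(const std::array<Color, 8>& children)
{
    Color sum{0.0f, 0.0f, 0.0f, 0.0f};
    for (const Color& c : children)
    {
        sum.r += c.r; sum.g += c.g; sum.b += c.b; sum.a += c.a;
    }
    return {sum.r / 8.0f, sum.g / 8.0f, sum.b / 8.0f, sum.a / 8.0f};
}

// blend = 0 keeps full child detail, blend = 1 collapses all 8 children to the
// parent average, matching the lower level of detail.
void BlendTowardsParent(std::array<Color, 8>& children, float blend)
{
    const Color parent = AverageChildren(children);
    for (Color& c : children)
    {
        c.r += (parent.r - c.r) * blend;
        c.g += (parent.g - c.g) * blend;
        c.b += (parent.b - c.b) * blend;
        c.a += (parent.a - c.a) * blend;
    }
}
```

With blend = 1 the eight children all equal their average, so tracing against them should behave roughly like tracing against the lower-detail parent voxel.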

To make a transition work, you need to interpolate between two things that are not too different. Even with a smooth transition, I can't see any way to resolve these types of differences:

Even if that transition was smoothed, it would still look terrible.


I think I will have to spend a couple of weeks getting the indirect lighting calculation as close as possible between stages of different resolutions, look at the final result, and see if the stages are close enough to interpolate between. If the results are bad, then I think one big volume texture will be the only way to go.


Josh Klint said:
Even if that transition was smoothed, it would still look terrible.

Not terrible. In the video above you only move back and forth, say, 20 cm. If your fading range is 4 m, you would see a visual difference of about 5% within that movement (0.2 m / 4 m). You might eventually notice a darker gradient starting to fade in at the ceiling, but it would not be harsh, and that's all you can do.

I made a picture to make sure we understand each other:

The left side has half the resolution of the right side, and the gradient on top shows the blending factor. The moving camera causes only gradual changes, exactly like a trilinear texture filter.

It won't be as easy for you as such a filter, but that's the goal at least.
What people often do wrong, imo, is to use a blending gradient only near the boundary of the higher LOD. That way they keep more detail, but the transition becomes visible. If we spread the gradient properly over the whole range, no transition is visible, just a constant change while moving.
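As a rough illustration of that whole-range gradient (a sketch only; the function names and the sampling calls in the comment are hypothetical, not engine code):

```cpp
#include <algorithm>

// Blend weight ramps from 0 at the camera to 1 at the far edge of the
// high-detail LOD's range, so camera movement only causes gradual change.
float LodBlendWeight(float distanceFromCamera, float lod0Range)
{
    return std::clamp(distanceFromCamera / lod0Range, 0.0f, 1.0f);
}

// Usage, with SampleLod0/SampleLod1 standing in for whatever returns the
// lighting result from each detail level:
//   float w = LodBlendWeight(d, lod0Range);
//   result  = SampleLod0(p) * (1.0f - w) + SampleLod1(p) * w;
```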

There are some open source engines around with VCT. I remember Castor3D and Wicked Engine. Maybe they have cascades too, and you could take a look at what they did.

Josh Klint said:
If the results are bad, then I think one big volume texture will be the only way to go.

No, I don't think so.

Imagine we have two LODs of 64^3 voxels each, which together give us the same coverage as one huge 128^3 volume.

The easy (and probably most practical) solution would be to compute the lower LOD without a hole in the middle, i.e. also computing the region that is covered by the higher LOD.
After both LODs are computed, you can blend the results as in my picture, and even if their results differ a lot (they will), the gradual change will hide the boundary and look fine.

The only complexity and issue I see is that the higher LOD needs to trace into the distant lower LOD, which will cause popping discontinuities within the volume as it moves through the scene. But the blending will hide those too for the most part, since the worst changes are near the boundary, which has a low weight in the final blend.
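A rough sketch of how the two-cascade layout described above might look, assuming camera-centered cascades and a hole-free outer cascade; all names and numbers are illustrative, not taken from any engine:

```cpp
#include <algorithm>
#include <cmath>

struct Cascade
{
    float center[3];   // follows the camera
    float halfExtent;  // e.g. 32 m for the inner 64^3 cascade, 64 m for the outer one
};

// 0 near the center of the inner cascade, 1 at its boundary, where the result
// should come entirely from the outer (lower-resolution, hole-free) cascade.
float InnerCascadeWeight(const Cascade& inner, const float p[3])
{
    float maxAxis = 0.0f;
    for (int i = 0; i < 3; ++i)
        maxAxis = std::max(maxAxis, std::fabs(p[i] - inner.center[i]));
    return std::min(maxAxis / inner.halfExtent, 1.0f);
}

// The blended result would then be something like
//   lerp(TraceInner(p), TraceOuter(p), InnerCascadeWeight(inner, p))
// where TraceInner/TraceOuter are hypothetical cone traces into each cascade.
```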

The issues I showed above were due to errors in the transition between stages: I was not using the right mipmap level when crossing from one stage to the next. The results should actually be pretty close (though not exact), because by the time you move into the next stage, you have usually moved up one LOD level. In other words, the first mipmap level of the second stage has the same voxel size as the second mipmap level of the first stage.
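A small illustration of that size relationship, under the assumption that each stage doubles the voxel size of the previous one; the helper names are made up for this sketch:

```cpp
// Voxel size of a given stage and mip: stage s at mip m has the same size as
// stage s + 1 at mip m - 1, because each stage doubles the base voxel size.
float VoxelSize(float baseVoxelSize, int stage, int mip)
{
    return baseVoxelSize * static_cast<float>(1 << (stage + mip));
}

// When a cone crosses from one stage into the next while sampling mip m, it
// should continue at mip m - 1 (clamped to 0) to keep its footprint consistent.
int ContinuationMip(int mipInPreviousStage)
{
    return mipInPreviousStage > 0 ? mipInPreviousStage - 1 : 0;
}
```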

This feature has been very frustrating, but the results really do compete with hardware raytracing, and the performance is so much faster.


