How to handle lighting for dynamic objects using static lightmaps?

Started by CDRZiltoid · 5 comments, last by CDRZiltoid 1 year, 2 months ago

I've been working for some time on a simple FPS game that uses a custom C++/OpenGL engine [link to youtube video]:

I've reached a point where I need to decide on the best way to implement lighting for dynamic objects. The current build uses static lightmaps for static geometry (and doors and lifts, though that may need to change in the future, as it looks a little off once they begin their translations if you're watching closely...as one would expect). I'm trying to decide whether I should add functionality for fancy spherical harmonic light probes or attempt another solution. At one point I considered doing away with the lightmaps and falling back to classic sector-based lighting, which would simplify the problem; however, I would lose the per-pixel lighting and shadows I get from the lightmaps, and any "shadows" would have to be implemented at the geometry level, which would mean greater attention would need to be paid when designing levels.

I had read from a few different sources that people had solved this by casting a ray downward from the player position and sampling the lighting directly underneath the player. I can see how that would work for the player character, but once you account for enemies and other dynamic objects in the scene, it would mean doing this for every moving object, which I'd expect to get quite expensive.

Another potential solution I had thought of would be to ensure that the level geometry all conforms to a strict grid layout (which it mostly does at this point), then create "ghetto" light probes at the floor position of each grid coordinate, bake lightmaps to those objects as well, then average the values at build time and load them into a 2D array that could be sampled at runtime based on the coordinates of the dynamic object in question...although as I'm typing this I'm realizing that would limit level design to something more Doom-ish, where you wouldn't be able to have traversable geometry on top of other traversable geometry...unless perhaps I used a 3D array...hmmm.

What I'm after is some way to accomplish this without complicating the level design process while retaining an acceptable effect.

Curious to know how others might solve this problem. Any ideas/suggestions greatly appreciated! :)


CDRZiltoid said:
I'm trying to decide if I should add functionality for fancy spherical harmonic light probes or attempt another solution.

CDRZiltoid said:
Another potential solution I had thought of would be to insure that the level geometry all conformed to a strict grid layout (which it mostly does at this point) and then create

Sounds like both are the same idea: a standard probe grid, which is what I would do.
You have a uniform grid (can be 2D or 3D), e.g. one probe per square/cubic meter. (Eventually you may need a sparse grid for larger levels, which can be as simple as one level of indirection, so only some grid cells point to an actual probe.)
For a pixel, you do a linear combination of the 8 probes at the corners of the grid cell the pixel is in (the same trilinear interpolation as 3D texture sampling).
To avoid artifacts from probes ending up in solid space, e.g. inside a wall, you project the probe to the closest surface and a bit into empty space before you bake it, so it does not end up just black.
For the baking, you could render one cube map per probe, then integrate the cube map texels into your probe format, e.g. SH2 or Valve's Ambient Cube (used in HL2).
The math for both of those formats is easy. It only becomes ‘fancy’ with higher-order spherical harmonics, eventually. But for GI, SH2 is good enough.
The Ambient Cube is even simpler: basically a cube map where each face has just one single color, which you accumulate using dot products with the 3 world axes.

After that you can sample static GI at any point and in any direction, though with a low spatial resolution of probes there will of course be some light leaking here and there. Thin walls can be a problem.

If you want reflections too, you need high angular resolution for the reflection probes, thus usually cube maps. Because this costs memory, it's common to place reflection probes at much lower spatial resolution, e.g. just one probe per room, possibly placed manually rather than on a grid.

If you remember HL2, or play it again, you can see they used the probe grid for many static objects; not everything is lightmapped. That's a good example of the quality you can expect.

CDRZiltoid said:
I had read from a few different sources where people had solved this by casting a ray downwards from the player position and sampling the lighting directly underneath the player.

IIRC, Quake did this. The problem is it doesn't give you any angular information, just a locally constant ambient term, so it's really low quality.

CDRZiltoid said:
What I'm after is some way to accomplish this without complicating the level design process while retaining an acceptable effect.

The probe grid can be fully automated easily.
But if your levels are large and memory becomes an issue, it's common to place grids manually. So you fit a box inside a room, set its resolution to 5x5x2 probes for example, and repeat this process to fill the entire level with probes where needed.
But then the lookup is no longer trivial. Either you have some acceleration structure to find the grids affecting a pixel, or you render the bounding box of each grid and accumulate it into the framebuffer in a deferred renderer.
Those are at least the options I see. This topic is rarely discussed in detail, so I don't know what people actually do.

@JoeJ This is excellent information! Thank you. Reading up on “Valve's Ambient Cube” at the moment.

**EDIT**

Found the following PDF that explains Valve's “Ambient Cubes”…and from what I can tell it makes complete sense and appears to be pretty simple. Think I'm going to start implementing an MVP example. We'll see how it goes! Thanks again.

Was able to get a proof of concept working in my engine with minimal effort. Now I'm working on figuring out the “best” way to efficiently determine which probe should be used based on the location of each dynamic object. The naive approach would be to simply take the position of each dynamic object and iterate over every light probe, comparing the distances between them, which would probably work just fine for my use case as long as I only calculated for the dynamic objects that need it each tick (for example, not for dead enemies or occluded objects).

However, the part of me that's prone to premature optimization wants to incorporate “giVolumes” which would encapsulate the “giProbes”, so that the distance calculations would be limited to the “giProbes” within the “giVolume” that the dynamic object was currently within. That would also handle the case where an object is closest to a particular probe that is occluded by a wall: as long as that probe is outside of the volume the dynamic object is currently within, it would not be considered for selection. The only downside is that these AABB volumes will need to be added manually, further complicating the level design process, but not by much. One will just need to take it into account when working on lighting.

CDRZiltoid said:
Only downside will be that these aabb volumes will need to be added manually further complicating the level design process

Your game looks like you might be using BSP trees for the levels? If so, the convex leaf cells could be used. Quake II RTX, for example, still does this to select lights.

CDRZiltoid said:
so that the distance calculations would be limited to the “giProbes” within the “giVolume” that the dynamic object was currently within.

If you use a global grid for the probes without any knowledge of rooms or walls, this problem can be addressed by adding a depth buffer to each probe, which then acts like a shadow map to cull occluded probes.
RTX GI does this, for example. Definitely worth a read anyway: https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9900-irradiance-fields-rtx-diffuse-global-illumination-for-local-and-cloud-graphics.pdf

JoeJ said:

Your game looks like you might use BSP trees for the levels?

Currently I am not. I'm loading the meshes in and drawing each one (with some simple batching of unique meshes). For collision, I create individual mesh objects within Bullet Physics and use Bullet's collision functionality to deduce what the character controller needs to collide with (resolving the collisions in a kinematic fashion).

JoeJ said:
RTX GI does this for example. Surely worth to read anyway: https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9900-irradiance-fields-rtx-diffuse-global-illumination-for-local-and-cloud-graphics.pdf

Appreciate the link. I'll give it a read.

This topic is closed to new replies.
