How does radiosity normal mapping work?

4 comments, last by lukesmith123 12 years, 9 months ago
I've read the ATI document on Half-Life 2's radiosity normal mapping (http://www2.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf)
and I just wanted to make sure I'd understood how this works before I attempt to implement it.

I understand that Half-Life 2 uses RGB lightmaps rather than black-and-white ones.

So in the shader, when rendering a face, we multiply each lightmap colour by the dot product of the corresponding normal-map colour and the normal of the face, add the results together to form the radiosity-normal-mapped lighting, and then multiply this by the albedo.

Is that how it works?


On another note, I often see people say that BSP isn't the best method for level geometry these days. I was just wondering what the alternatives are, considering we can use BSP both to render the level geometry and to test collisions against it very easily?

thanks,
Basically, for each texel in the lightmap you have 3 lighting values. These are the diffuse lighting computed in 3 directions in the tangent space of the surface, which basically means they are oriented about a plane located at that texel. This is visualized on page 10 of the presentation you linked, where the 3 arrows represent those directions on the surface and the numbers represent the XYZ components of the direction vectors in tangent space. At rendering time, you basically just take your tangent-space normal from the normal map, dot it with each of those direction vectors, and multiply each dot product with the corresponding lighting value for that direction (or alternatively, you can transform the direction and normal vectors to world space or view space if you prefer to do it that way).
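To make that concrete, here's a minimal sketch in plain Python (standing in for shader code), assuming the three tangent-space basis directions shown on page 10 of the presentation; the function name `shade_radiosity` and the clamped-weight combination are my own illustration, not Valve's exact shader:

```python
import math

# The three Half-Life 2 tangent-space basis directions (page 10 of the
# ATI presentation). They are orthonormal and each is tilted the same
# amount toward the surface normal (the tangent-space z axis).
BASIS = [
    (-1.0 / math.sqrt(6.0),  1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    ( math.sqrt(2.0 / 3.0),  0.0,                  1.0 / math.sqrt(3.0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_radiosity(normal_ts, lightmaps, albedo):
    """Combine the 3 directional lightmap samples for one pixel.

    normal_ts -- unit tangent-space normal fetched from the normal map
    lightmaps -- 3 RGB lighting values, one per basis direction
    albedo    -- RGB albedo from the diffuse texture
    """
    lighting = [0.0, 0.0, 0.0]
    for basis_dir, light in zip(BASIS, lightmaps):
        # Weight each lightmap colour by how much the normal faces that
        # basis direction, clamped so opposing directions contribute 0.
        w = max(0.0, dot(normal_ts, basis_dir))
        for c in range(3):
            lighting[c] += w * light[c]
    # Modulate the accumulated lighting by the surface albedo.
    return tuple(a * l for a, l in zip(albedo, lighting))
```

For a flat normal (0, 0, 1), each weight comes out to 1/√3, so the three lightmaps blend equally, which is the expected behaviour for an unperturbed surface.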
Oh ok I understand thanks.

So, using the basis vectors on page 10, presumably I could create my own radiosity lightmaps in this style from a greyscale lightmap?

Could this be achieved by multiplying the vectors for each component on page 10 by the normal of the face and then multiplying by the greyscale map?
No, you can't compute it from a standard lightmap. A standard lightmap only has lighting information for one direction: the normal at that texel. For Valve's approach you need the lighting in the 3 basis directions. You can only compute this in the lightmap baker itself.
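To illustrate why this has to happen in the baker, here's a hedged sketch (plain Python, hypothetical names like `bake_texel`) of accumulating incoming light samples at one texel into the three basis values. A standard lightmap would collapse all those samples into a single cosine-weighted sum along the surface normal, and the directional information needed for the three basis values cannot be recovered from that sum afterwards:

```python
import math

# Same Half-Life 2 tangent-space basis directions as in the presentation.
BASIS = [
    (-1.0 / math.sqrt(6.0),  1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    ( math.sqrt(2.0 / 3.0),  0.0,                  1.0 / math.sqrt(3.0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bake_texel(samples):
    """Accumulate incoming light at one lightmap texel.

    samples -- list of (direction, rgb) pairs, where direction is the
               unit tangent-space direction the light arrives from.

    Instead of storing one scalar/colour per texel, store one RGB value
    per basis direction, each weighted by how well the incoming light
    aligns with that direction.
    """
    texel = [[0.0, 0.0, 0.0] for _ in BASIS]
    for direction, rgb in samples:
        for i, basis_dir in enumerate(BASIS):
            w = max(0.0, dot(direction, basis_dir))
            for c in range(3):
                texel[i][c] += w * rgb[c]
    return texel
```

Because the basis is orthonormal, a light arriving exactly along one basis direction lands entirely in that basis value and contributes nothing to the other two.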
Oh I see, well thanks for the info:)
Sorry, quick question. I decided to start off with greyscale lightmaps and greyscale bump maps to make things a little easier at first.

In this case, would I do Albedo * lightmap * dot(bumpmap, normal);

The only reason I ask is that I couldn't really tell the difference between doing that or just

Albedo * lightmap * bumpmap;

