Normal map mip generation


When generating mip maps for normal maps, is it common practice to re-normalize the normal vectors for the mips after downsampling?

Downsampling can produce normals with a length other than 1, which affects the shaded result for distant geometry. In my experience, distant mips produce a darker final color (because the averaged normals have length < 1) if they are not re-normalized.

The main reason this matters is that I am using 2-channel normal maps stored in the BC5 compression format (this seems to be the best format for storing compressed normals). This format assumes that normals have length 1 so that the Z component can be reconstructed in the pixel shader. If the input normal map (before dropping the Z component) is not unit length, the results can look wrong.
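For reference, the reconstruction is just the unit-length constraint solved for Z. A minimal C++ sketch (the function name is mine; a shader would do the same math):

```cpp
#include <algorithm>
#include <cmath>

// Sketch: reconstruct Z from a 2-channel (BC5) normal sample.
// Assumes x and y have already been remapped from the [0,1] texture
// range to [-1,1]. Only correct if the stored normal was unit length;
// otherwise the implied Z is wrong, which is the artifact I'm seeing.
float ReconstructZ(float x, float y)
{
    // max() guards against x*x + y*y slightly exceeding 1 due to
    // filtering or compression error, which would make sqrt return NaN.
    return std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
}
```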

If I make sure the source image is normalized, this helps, but I still notice some difference in the shading for distant geometry when toggling 2-component normals on and off.

[Image: XYZ normals]
[Image: XY normals]

If you toggle between the images you can see how the XY normals are less dark than the XYZ normals for far away mips, because they enforce that the normals are always unit length. I like the XY result better, but it seems weird that it would be so different.

Does anyone have insight into the best way to handle mip map generation for normal maps?


Normalizing in the shader is probably the easiest way to deal with this.

Keep in mind that even with full 3-component normals stored in a normal map, if you're not doing point sampling (meaning you have a linear sampler), the values are interpolated between samples, so the result is unlikely to be truly normalized even in that case. That may not matter much in texture areas where normals aren't varying sharply, but depending on your algorithm it can introduce artifacts. So if you're doing linear sampling of normal maps, I suggest normalizing the sample in your shader (I do).

I am already normalizing the normals in the shader (this is required with triplanar normal mapping). I'm more asking about generating the mip maps during the asset build process, and how this interacts with 2-component XY normals. Since XY normals require the input XYZ normals to be unit length for correct results, it seems like the XYZ normals should be normalized after downsampling, but before dropping the Z component.
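In other words, something like this during the asset build - a minimal illustrative sketch (my own, not tested production code) of one 2x2 box-filter downsample step that renormalizes before the Z component would be dropped:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Sketch: one 2x2 box-filter downsample for a normal-map mip, with
// renormalization before Z is dropped for BC5 packing. Assumes 'src'
// holds decoded [-1,1] normals and that w and h are even; a real asset
// pipeline would also handle odd sizes, wrapping, etc.
std::vector<Vec3> DownsampleNormals(const std::vector<Vec3>& src, int w, int h)
{
    std::vector<Vec3> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
    {
        for (int x = 0; x < w / 2; ++x)
        {
            const Vec3& a = src[(2 * y) * w + (2 * x)];
            const Vec3& b = src[(2 * y) * w + (2 * x + 1)];
            const Vec3& c = src[(2 * y + 1) * w + (2 * x)];
            const Vec3& d = src[(2 * y + 1) * w + (2 * x + 1)];

            // Sum the four normals (normalizing the sum equals
            // normalizing the average).
            Vec3 n = { a.x + b.x + c.x + d.x,
                       a.y + b.y + c.y + d.y,
                       a.z + b.z + c.z + d.z };

            // Averaged normals are shorter than unit length wherever the
            // inputs disagree; renormalize so the XY channels that survive
            // BC5 encoding imply the correct Z.
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
            else            { n = { 0.0f, 0.0f, 1.0f }; } // degenerate case

            dst[y * (w / 2) + x] = n;
        }
    }
    return dst;
}
```

After this step, only the X and Y channels would be packed for BC5, and the shader reconstructs Z.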

I've written a tool myself that lets you open/compress/save textures (also callable headless from the command line). In window mode the tool looks like this:

Fig. 01 - Normal map uncompressed in the left viewport, compressed with BC1 in the right viewport

Or this:

Fig. 02 - Normal map uncompressed in the left viewport, compressed with BC5 (NM) in the right viewport

So, in short, the tool lets you load an image and compress it, with parameters for which compression format to use, whether to generate mipmaps, and mipmap generation flags (currently alpha-tested and normal-map). What does it actually do in terms of loading and compression?

Loading image

While it might sound straightforward, it is a tiny bit more complex than one might think. Loading standard formats (JPEG, PNG, BMP, etc.) is obvious - they contain a single RGB or RGBA layer, which is loaded and stored as the bottom-level image of the mipmap pyramid. An already-compressed image (like a DDS storing BCn data) or a custom engine format (I called mine SCTEX, also holding BCn-compressed data) is loaded into both viewports - the right side (which stays compressed) and the left side (where you have to decompress the texture so you can work with it).

Now here is the deal - when loading BC5 (which stores just 2 channels), how do you know whether it is merely a 2-channel gradient or actually a normal map? Well… in a custom format like SCTEX you know. I think DDS can flag it when the compression is DXT BC5NM (which really is BC5, but specifies that it stores a normal map), but in general? You don't. For a normal map, I decompress both channels and reconstruct the third.

Mipmap Generation

Mipmap generation is straightforward - it is ALWAYS done on the uncompressed texture (which is then compressed per level of the mipmap pyramid). The devil is in the details - or rather the flags.

Normal map generation means that each resulting pixel in every level of mipmap pyramid is re-normalized after computation.

Alpha-test generation calculates the percentage of coverage at the bottom level of the mipmap pyramid. Each generated upper level then scales the resulting alpha to get as close as possible to the original coverage. This prevents the classic artifact of leaves/grass disappearing in the distance.
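To illustrate the coverage idea (this is a sketch, not my actual tool code - the function names and the bisection search are just one way to do it):

```cpp
#include <algorithm>
#include <vector>

// Coverage = fraction of texels whose alpha passes the test threshold.
float Coverage(const std::vector<float>& alpha, float cutoff)
{
    int passed = 0;
    for (float a : alpha)
        if (a >= cutoff) ++passed;
    return float(passed) / float(alpha.size());
}

// Find a multiplier for this mip's alpha so its coverage matches the
// base level's. Bisection works because scaling alpha up can only
// increase coverage (monotonic).
float FindAlphaScale(const std::vector<float>& mipAlpha,
                     float targetCoverage, float cutoff)
{
    float lo = 0.0f, hi = 4.0f; // assumed search range for the multiplier
    for (int i = 0; i < 16; ++i)
    {
        float mid = 0.5f * (lo + hi);
        std::vector<float> scaled(mipAlpha);
        for (float& a : scaled)
            a = std::min(1.0f, a * mid);
        if (Coverage(scaled, cutoff) < targetCoverage)
            lo = mid; // too little coverage: scale alpha up
        else
            hi = mid;
    }
    return 0.5f * (lo + hi);
}
```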

Conclusion

Keep in mind that this tool was made for a custom in-house engine for a specific purpose - mipmap generation for normal maps, with support for alpha coverage, plus compression into BCn formats to reduce the memory footprint of textures. Hopefully this short description of how I approached a similar issue helps at least a little bit.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Vilem Otte said:
Normal map generation means that each resulting pixel in every level of mipmap pyramid is re-normalized after computation.

Ok this is what I was looking for. It seems like the right thing to do, but I hadn't seen it mentioned anywhere.

Interesting note about the alpha testing, I'll keep that in mind when dealing with those kinds of textures.

Aressera said:
Ok this is what I was looking for. It seems like the right thing to do, but I hadn't seen it mentioned anywhere.

https://developer.download.nvidia.com/whitepapers/2006/Mipmapping_Normal_Maps.pdf
Second entry when googling “normal map mip generation”.

This even goes one step further and explores an algorithm that tries to fix some additional artifacts that may arise from mipmapping, even with renormalization.

@Juliean Funny thing is, I've had this PDF sitting on my desktop since before I started this thread. From skimming, it doesn't answer my main question; it's more about their lookup-table technique (done in the shader) for avoiding specular aliasing. The abstract:

The result of averaging or interpolating unit normals is less than unit length unless all normals within the footprint happen to be identical. Most algorithms simply renormalize, but this paper explores how the shortening can be used as a measure of normal variation to eliminate the common problem of strobing/sparkling noise due to aliasing of specular highlights.
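For anyone skimming like me: the core of it is the "Toksvig factor", which uses the shortening of the filtered normal as a variance measure to damp the specular exponent. Roughly something like this (my paraphrase - check the PDF for the exact formulation):

```cpp
#include <algorithm>
#include <cmath>

// Rough sketch of the paper's idea: the shorter the filtered normal,
// the more the normals varied within the footprint, so reduce the
// specular exponent accordingly. Paraphrased; verify against the paper.
float ToksvigSpecularPower(float filteredNormalLength, float specPower)
{
    float len = std::clamp(filteredNormalLength, 1e-4f, 1.0f);
    float ft  = len / (len + specPower * (1.0f - len));
    return ft * specPower; // shorter normal -> lower (broader) exponent
}
```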

