
Compressed DXT textures and mipmaps

Started by December 07, 2011 03:43 PM
5 comments, last by maxest 13 years, 1 month ago
I've been wondering how compressed textures actually work with mipmaps. As far as I know, a texture must be a multiple of 4 in width and height to be compressed. So does that mean that if I create a compressed 1024x256 texture with a full mip chain, I will get these sizes:

1024 - 256
512 - 128
256 - 64
128 - 32
64 - 16
32 - 8
16 - 4
8 - 4
4 - 4
4 - 4
4 - 4

? My reasoning for so many 4 - 4 levels is that if the texture were not compressed, the full mip chain in that case would be:

1024 - 256
512 - 128
256 - 64
128 - 32
64 - 16
32 - 8
16 - 4
8 - 2
4 - 1
2 - 1
1 - 1

The code I have written so far does seem to confirm my guess about the sizes of the bottom-most mip levels, but I would like to make sure that is really how they work.
Technically it is not possible to DXT-compress anything that is not a multiple of 4, because DXT is a block compression scheme that takes a 4x4 block of input pixels and chooses some "good" combination of two endpoint colors plus per-pixel interpolation between them.

However, it is legal not to use all pixels in a block, and it is even legal to reuse data within the buffer object feeding the texture for smaller MIP levels. It's a bit hard to tell with the naked eye exactly where the data for each level comes from :-)
Well, I actually wish I knew where to take the data from, because I'm working on my own image format and I would like to be able to store both compressed and uncompressed data, so it would be nice to know what happens with mip levels that have a width or height below 4 ;)
The best strategy would really be to use a standard DDS compressor (such as the NV texture tools) and use whatever sits at the respective position the header points to as your data. You can mangle that into your own format if you like, but chances are you can't significantly outperform what the existing tool already does (maybe by a few bytes, if lucky).

In theory, with some luck, the binary data for the 4 blocks forming an 8x8 mipmap could occur earlier in some bigger mip level, and e.g. the 1x1 and 2x2 mipmaps could take their data from the same 4x4 DXT block. The people writing tools like NV texture tools have spent months figuring out those details, and it would be unwise not to use their knowledge. I treat the whole DXT payload as one "blob" in my loader and feed whatever they give me as the source for the mip levels.

Well, I actually wish I knew where to take the data from, because I'm working on my own image format and I would like to be able to store both compressed and uncompressed data, so it would be nice to know what happens with mip levels that have a width or height below 4 ;)
If you have a mip level that would be e.g. 2x2, then the surface is officially 2x2 (if you call GetLevelDesc(...) for that level, you'll get a width and height of 2 returned); however, it'll still require a full DXT block, so it'll take 8 bytes (DXT1) or 16 bytes (DXT2-5).

So to answer your question another way: the mip levels do get smaller, but anything <= 4x4 pixels still takes up a whole block's worth of memory. And if you had a mip that would be 8x1, it'll be two DXT blocks wide and one block high.
Okay, I think I get it. My code seems to work fine under the assumption that none of the mip levels has a width or height < 4. It works because width and height are only used to compute the total size the mip level takes, so given your explanation, Evil Steven, it seems to yield correct results. Thanks.

@samoth: well, I actually don't feel like using any external tool. I want my engine to generate proper "cached" texture data so that it can be quickly loaded at runtime. But I guess NV provides some library to use; I'll probably check it out.
Using the standard D3DX functions to load a .dds file should give you good performance, and saves writing a whole bunch of texture loading code. The DirectX SDK also comes with a texture tool that you can use to make .dds files. There's also Squish if you want to make your own texture conversion tool.
If you have any interest in making your own compressor, this thread may be of interest to you.
I have since made substantial improvements to the algorithm, including large performance gains from multithreading.

Still not quite the same quality as ATI's compressor on average, but I have a few more tricks up my sleeve that I'll save for the weekend, and I think I can get it to meet, beat, or come close to ATI.
In any case, this algorithm beats NVidia's in all cases, and since I believe NVidia's is just a modification or enhancement of Squish, it can be expected to provide higher quality than Squish as well.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Thanks for pointing these libs out. I knew about Squish before and was actually planning to use it in the next iteration of my engine, though in the current iteration I will stick to DX's internal compression. My renderer supports both D3D9 and OpenGL, so I compared how good both APIs are at compression. It turns out that the compression done by D3D9 (by which I simply mean creating a texture from a file in a DXT format and retrieving the data with LockRect) is far better than the one done by OpenGL (glTexImage2D with an appropriate DXT internal format). So currently my engine, if it uses DX, compresses textures and saves them in my custom format so that OGL can quickly read them, getting the aforementioned better quality.

This topic is closed to new replies.
