UV Mapping: To Squeeze Down or Not, and How Big is Too Big

8 comments, last by Thaumaturge 3 years ago

I'm creating a model that's intended for others to use. Right now, I'm at the UV-mapping phase, and I find myself uncertain:

The model in question has a fair few elements, and even with mirroring halving the UV-space being used, the various UV-maps seem to take up a fair bit of space.

Now, I could just arrange them as-is into one UV-map and call it a day. However, that means either having a rather large texture-map, or having blurry textures.

A large texture-map would be a simple solution--but how large is too large? Would a 4096x4096 texture-map be too much to dedicate to a single character…?

Conversely, I could adjust the UV-maps to minimise space--squashing areas with little detail--and have sections of similar colour overlap. But doing either might incur consequences should someone want to use the UV-coordinates for something other than the textures that I provide.

If the model were for my own use, there would likely be little trouble here: I would likely have some idea of how it's to be used. But as I intend to offer it to others, I'm not quite sure of what's wise here.

Being thus uncertain, I come here to ask for advice on the matter!

MWAHAHAHAHAHAHA!!!

My Twitter Account: @EbornIan


If you are targeting PC, a single 4096x4096 texture is likely the best way to go, unless you are aiming at really low-end hardware. I'd also note that if you are targeting a modern PC with a normal-to-high tri count (as opposed to a low-poly style), you are likely going to want multiple maps anyway to support the modern PBR materials/shaders that are generally in use for these. It is far from uncommon to have several 4096x4096 maps for your “hero” models these days.



The question of what is "too large" really comes down to the target platform you're aiming for. Many GPUs today have more than 8GB of VRAM, and artist-generated textures are usually stored compressed with block compression. Let's say the engine you're using is built around DirectX and you use BC7 compression: on an 8GB machine you can then fit around 500 such textures into video memory, which is probably a lot more than you need. What makes the situation even more complex is that many engines support dynamic streaming of resources, so depending on the distance to the character and how much else is streamed in at the time, the full texture might not be in memory, just the mips currently needed.
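That back-of-the-envelope budget is easy to sanity-check yourself. A quick sketch, assuming BC7's rate of 1 byte per texel (16 bytes per 4x4 block) and a full mip chain adding roughly a third on top of the base level:

```python
def bc7_texture_bytes(width, height, with_mips=True):
    """Approximate GPU memory for a BC7-compressed texture.

    BC7 stores 16 bytes per 4x4 block, i.e. 1 byte per texel.
    A full mip chain adds roughly one third on top of the base level.
    """
    base = width * height  # 1 byte per texel at BC7's 8 bits/texel
    if not with_mips:
        return base
    mips = 0
    w, h = width, height
    while w > 1 or h > 1:
        w, h = max(w // 2, 1), max(h // 2, 1)
        mips += w * h
    return base + mips

tex = bc7_texture_bytes(4096, 4096)
budget = 8 * 1024**3                # an 8GB VRAM budget
print(tex / 2**20)                  # roughly 21.3 MiB including mips
print(budget // tex)                # how many such textures fit in 8GB
```

Of course this ignores everything else that lives in VRAM (meshes, render targets, other textures), so the real budget for character textures is a fraction of that.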

To summarise, it's a kind of “how long is a piece of string” question, and the answer is “it depends”. The best thing is to familiarise yourself with the memory-consumption debugging capabilities provided by the engine you're using, and then check whether you're over budget on the target platform.

Thaumaturge said:
Conversely, I could adjust the UV-maps to minimise space--squashing areas with little detail--and have sections of similar colour overlap. But doing either might incur consequences should someone want to use the UV-coordinates for something other than the textures that I provide.

Nah, i'd say it's better you make a good UV map with no overlaps, low stretching, and a uniform texel-size ratio. Then it's easy to change or modify textures afterwards. It's also easy for the client to make such optimisations / compromises themselves if necessary, but it's hard to fix it the other way around.

Ah, thank you all! ^_^

Your answers set my mind somewhat at ease: “One big texture” + “unwarped UVs” is rather an easier route, I feel, than what I was contemplating.

Regarding what I'm targeting: I'm not, really.

To explain: What I'm making is a sample-model for an engine that I use; something to give devs coming to the engine a starting point. A stand-in character. A bit like the Unreal Engine “android”, but a bit fancier. As a result, I don't have a single target spec; just: “Something that people are likely to be able to use.”

(Although it's possible that it'll also see use in a showcase-project that the community is working on.)


Thaumaturge said:
Regarding what I'm targeting: I'm not, really.

It always works to scale down from 4K → 2K. To get something in between, you'd need a second UV set using a 4K x 2K non-square texture. But i don't know if that technically still counts as POT, or whether it has a performance penalty on (some) GPUs? If it makes sense, DCCs usually support resampling a texture from one UV set to another, so you'd only need to rearrange the UV charts once.

As you also mentioned using mirrored texture to save space, does this still work seamlessly with normal mapping? Even if charts touch each other?

JoeJ said:
It always works to scale down from 4K → 2K.

That's a good point, if I understand you correctly.

JoeJ said:
To get something in between, you'd need a second UV set using a 4K x 2K non-square texture. …

I'm not sure of quite what you're suggesting here--why would I want something in-between? Or a second set of UVs?

JoeJ said:
As you also mentioned using mirrored texture to save space, does this still work seamlessly with normal mapping? Even if charts touch each other?

That is something that I'm concerned about, to be honest. I'm hoping that it won't be a problem, since the places in which the mirrored normal-maps meet are likely to be pretty much flat anyway.

However, at worst it's only likely to be a problem for the torso- and head- elements; the mirroring of the arms and legs should, I think, be fine.

And if the mirroring doesn't work out, I can simply rework the UV-map to be non-mirrored. I'd rather try it and fail than not take a shot at the optimisation.
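For what it's worth, the usual way engines handle mirrored charts is to store a handedness sign alongside the tangent (often in tangent.w) and flip the bitangent when building the tangent basis, so sampled normals still point the right way on the mirrored half. A minimal sketch of that idea, with plain Python standing in for shader code and illustrative names:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def make_bitangent(normal, tangent, sign):
    """B = sign * (N x T); sign is -1 on mirrored UV charts.

    Without the sign flip, tangent-space normals on the mirrored
    half would lean the wrong way, producing a visible seam."""
    b = cross(normal, tangent)
    return (sign * b[0], sign * b[1], sign * b[2])

# On the original half, the bitangent points along +Y;
# on the mirrored half it is flipped to -Y.
print(make_bitangent((0, 0, 1), (1, 0, 0), +1.0))
print(make_bitangent((0, 0, 1), (1, 0, 0), -1.0))
```

Whether an exporter/engine pair gets this right automatically varies, which is why testing it on the actual model, as you plan to, is the sensible approach.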


Thaumaturge said:
I'm not sure of quite what you're suggesting here--why would I want something in-between? Or a second set of UVs?

It's meant to give more options for texture memory requirements (because downscaling only allows jumps of ¼ at a time):

Model variant using UV set 1: 4K x 4K → 2K x 2K → 1K x 1K…

using UV set 2: 4K x 2K → 2K x 1K…

So the second set would fill the gaps. But i don't think that's ever needed.

JoeJ said:
It's meant to give more options for texture memory requirements (because downscaling only allows jumps of ¼ at a time):

Ah, I see! Thank you for the explanation!

Yeah, I imagine now that most machines will handle a 4096 texture, and that those that won't will likely be fine enough with a 2048 one.

Going back then to this:

JoeJ said:
But i don't know if that technically still counts as POT, or whether it has a performance penalty on (some) GPUs?

My understanding--and I do stand to be corrected--is that textures are pretty much fine as long as both of their dimensions are powers of two, even if those dimensions aren't equal.
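That matches the usual rule of thumb: the power-of-two property is checked per dimension, so a non-square 4096x2048 texture still qualifies. A quick sketch of the check (not engine-specific):

```python
def is_pot(n):
    """True if n is a power of two. n & (n - 1) clears the lowest
    set bit, so the result is 0 only when n has a single bit set."""
    return n > 0 and (n & (n - 1)) == 0

def texture_dims_ok(width, height):
    # Non-square is fine as long as each dimension is itself POT.
    return is_pot(width) and is_pot(height)

print(texture_dims_ok(4096, 2048))  # True: both dimensions are POT
print(texture_dims_ok(4096, 3000))  # False: 3000 is not a power of two
```

Note that block-compressed formats additionally want dimensions that are multiples of the 4x4 block size, which any POT size of 4 or above satisfies anyway.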


