Large Texture or Smaller Textures?
Hi, would having one large 512x512 texture (which is divided up, with the mapping coords used to pick the correct icon) be better than having 200 32x32 textures?
How does OpenGL handle each case in memory, and which is better?
----------------------------http://djoubert.co.uk
One problem you might get when putting multiple images into one texture is that texels next to the edge texels of your small sub-textures will bleed in when you use any kind of filtering, such as bilinear or trilinear.
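A common way to work around that, if a concrete sketch helps: inset each sub-texture's UV rectangle by half a texel, so the filter kernel can never reach the neighbouring icon. A minimal sketch (the function name and grid layout are my own, assuming a 512x512 atlas of 32x32 icons):

```cpp
// Minimal sketch (illustrative names): compute UV coordinates for one 32x32
// icon inside a 512x512 atlas, inset by half a texel on each side so bilinear
// filtering never samples the neighbouring icon.
struct UVRect { float u0, v0, u1, v1; };

UVRect iconUVs(int col, int row,
               float atlasSize = 512.0f, float iconSize = 32.0f)
{
    const float halfTexel = 0.5f / atlasSize;          // half a texel in UV space
    UVRect r;
    r.u0 = (col * iconSize) / atlasSize + halfTexel;   // left edge, pulled inward
    r.v0 = (row * iconSize) / atlasSize + halfTexel;   // top edge, pulled inward
    r.u1 = ((col + 1) * iconSize) / atlasSize - halfTexel;
    r.v1 = ((row + 1) * iconSize) / atlasSize - halfTexel;
    return r;
}
```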
To make it is hell. To fail is divine.
Horses for courses, mate; personally I would go for the single large texture, as it would be easier to work on when you're creating your individual 32x32 images on it. Be aware that some old cards won't handle anything bigger than 256x256 properly, and usually compress larger textures horribly.
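On that note, you can guard against such cards by asking the driver for its limit before committing to an atlas size. A minimal sketch (note that GL_MAX_TEXTURE_SIZE is only an upper bound; a proxy-texture check is stricter, but this catches the 256x256 cards):

```cpp
// Minimal sketch: query the driver's texture size limit at start-up, so we
// can fall back to smaller atlas tiles on old hardware.
#include <GL/gl.h>

int maxTextureSize(void)
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);  // upper bound reported by the driver
    return maxSize;                                // e.g. use 256x256 atlases if < 512
}
```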
--
Cheers,
Darren Clark
September 18, 2005 06:29 PM
That depends on your needs...
If you've got a whole lot of textures to load and unload at run time, you might prefer keeping small textures in video RAM and swapping out no-longer-needed ones for now-needed ones - given that you've got an effective algorithm to determine which textures need to be resident, and enough video RAM to make sure that you won't run out of memory in any case you could be faced with.
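For that "effective algorithm" part, the usual choice is a least-recently-used policy. A hypothetical sketch (all names are mine; a real version would also re-upload an evicted texture on demand):

```cpp
// Hypothetical LRU texture cache: once a fixed budget of resident textures is
// exceeded, evict the GL texture that hasn't been touched for the longest time.
#include <GL/gl.h>
#include <cstddef>
#include <map>

class TextureLRU {
public:
    explicit TextureLRU(std::size_t budget) : budget_(budget), clock_(0) {}

    void touch(GLuint tex) {                 // call whenever a texture is used
        lastUse_[tex] = ++clock_;
        while (lastUse_.size() > budget_)
            evictOldest();
    }

private:
    void evictOldest() {
        std::map<GLuint, unsigned>::iterator oldest = lastUse_.begin();
        for (std::map<GLuint, unsigned>::iterator it = lastUse_.begin();
             it != lastUse_.end(); ++it)
            if (it->second < oldest->second)
                oldest = it;
        GLuint victim = oldest->first;
        glDeleteTextures(1, &victim);        // free the video-RAM copy
        lastUse_.erase(oldest);
    }

    std::size_t budget_;                     // max resident textures
    unsigned clock_;                         // monotonically increasing "time"
    std::map<GLuint, unsigned> lastUse_;     // texture id -> last use time
};
```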
On the other hand, if you have a defined set of textures needed in a scene that won't change at run time, you would prefer loading everything into one large texture to a) minimize loading and conversion time, b) avoid too many costly operating-system calls for file retrieval, and c) let your graphics card (driver) hopefully boost texture-mapping performance, as one texture usually costs the card less overhead than many textures that all have to be managed.
So, to be absolutely sure, try both and profile each method; that gives you exactly the results you want, plus the ability to determine the solution of choice on a running system by using a simple test sequence at program start.
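Such a start-up test sequence could be as simple as this sketch (the two render paths are placeholders you'd have to write; each frame should end with glFinish() so GPU work is actually counted in the CPU timing):

```cpp
// Hypothetical start-up benchmark: render the same test scene both ways for a
// fixed number of frames and keep whichever path is faster.
#include <ctime>

double timeFrames(void (*renderFrame)(void), int frames)
{
    std::clock_t start = std::clock();
    for (int i = 0; i < frames; ++i)
        renderFrame();                       // placeholder; call glFinish() inside
    return double(std::clock() - start) / CLOCKS_PER_SEC;
}

bool atlasIsFaster(void (*atlasPath)(void), void (*smallTexturesPath)(void))
{
    const int kFrames = 200;                 // enough frames to smooth out noise
    return timeFrames(atlasPath, kFrames) < timeFrames(smallTexturesPath, kFrames);
}
```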
sebastian porstendorfer
(not yet registered, so don't object to me being 'anonymous')
This problem has been tackled various times at the last two GDCs, under the name of "texture atlases".
Most of the time it comes up embedded in performance and batching talks.
Boiling everything down, a single big texture helps performance by allowing better batching. Although batching is not a problem in GL the way it is in D3D, bigger batches still help a lot.
Performance will almost surely be better with the big texture, although your mileage may vary. I would say it also gives you better texture memory usage (probably just a minor improvement).
As said, problems may arise when mipmapping and filtering come in. At the last GDC, NVIDIA proposed using a fragment program to clamp the texcoords to [0..1-dx] and to map those coordinates onto the right sub-texture.
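Roughly what that looks like, if it helps - this is my own reconstruction of the idea from the description above, not NVIDIA's actual code:

```cpp
// Reconstruction sketch: a GLSL fragment shader (stored as a C++ string) that
// maps the mesh's [0..1] texcoords into one sub-rectangle of the atlas and
// clamps them first, so filtering cannot wander into the next sub-texture.
const char* kAtlasFragmentShader =
    "uniform sampler2D atlas;\n"
    "uniform vec4 rect;  // xy = sub-texture origin, zw = size, in UV space\n"
    "uniform float dx;   // small inset, e.g. one texel in UV space\n"
    "void main()\n"
    "{\n"
    "    vec2 uv = clamp(gl_TexCoord[0].st, 0.0, 1.0 - dx); // stay off the seam\n"
    "    gl_FragColor = texture2D(atlas, rect.xy + uv * rect.zw);\n"
    "}\n";
```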
Mipmapping is real trouble with non-power-of-two sub-textures, but it's not so difficult to build the atlas in a way that keeps the mipmaps good.
What they didn't say in the presentation is the actual need for better management. This is a really complicated story. I believe you should put it on the TODO list, but don't start scratching your head now unless you are seeing very bad performance.
Example: you cannot assume all the textures will fit in a single atlas. Which textures should be put in which atlas? Some kind of visibility determination may help. What happens if you need too many atlases anyway? How much wasted memory do you want to trade for extra performance when a texture does not fit in the atlas? Should an atlas hold textures of arbitrary sizes, or should they all be the same? What if two atlases share a great number of sub-textures? Split, or allow duplicates (in the first case you may run out of texture samplers; in the latter, massive memory waste may happen)?...
What I mean is that it's really complicated to make it work nicely. I suggest you spend your time on something more important.
Previously "Krohm"
My advice is simple: use the big 512x512 image, or perhaps a few 256x256 ones, depending on your needs.
If you keep your icons from producing texels smaller than pixels, you won't need any mipmapping, so you won't get any mipmapping problems.
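In other words, draw the icons at a 1:1 texel-to-pixel ratio. A minimal fixed-function sketch of that, assuming an orthographic projection set up in pixel units (immediate mode for brevity):

```cpp
// Minimal sketch: draw a 32x32 icon at exactly 32x32 pixels, so texels never
// shrink below pixel size and mipmaps are unnecessary. Assumes a projection
// in pixel units, e.g. glOrtho(0, windowWidth, windowHeight, 0, -1, 1).
#include <GL/gl.h>

void drawIcon(GLuint tex, float x, float y)   // x, y in pixels
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,         y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + 32.0f, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + 32.0f, y + 32.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,         y + 32.0f);
    glEnd();
}
```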
The "better management" Krohm hinted is actuarly called viritual texturing (acording to john carmack atleast), it can more or less be done on todays hardware and basicly alows shaders(both vertex and fragment) to access the texture and geometry memory without any restrictions.
It will basicly act like you have all your textures baked in one huge texture, but without the nasty mipmap side effects of doing that.
This will truly free game creators in the same way as programable shaders has.
www.flashbang.se | www.thegeekstate.com | nehe.gamedev.net | glAux fix for lesson 6 | [twitter]thegeekstate[/twitter]
Quote: Original post by lc_overlord
The "better management" Krohm hinted is actuarly called viritual texturing (acording to john carmack atleast), it can more or less be done on todays hardware and basicly alows shaders(both vertex and fragment) to access the texture and geometry memory without any restrictions.
Sorry, I shouldn't have overlooked this. That is not what I meant to say. The "better management" I wrote about is the need to exploit data (texture) locality. While taking the textures and putting them in a big atlas may work, it could also be sub-optimal in various cases (see the example problems above).
This almost certainly won't be hardware-accelerated for quite a long time, because it needs a much higher level of abstraction than any 3D API/hardware allows.
Playing with the data structures may be necessary.
As for the fragment shaders, I believe you've actually got it.
Previously "Krohm"
Perhaps I should clarify what I just said about virtual texturing.
Virtual texturing does not merge all textures into one big one; instead it allows shaders to use DMA to read (or write) texture information directly from texture memory.
The big advantage is that you do not have to change render state when switching textures.
The big disadvantage is probably that you would lose all built-in texture filtering (although I would presume some kind of middle-ground functions will exist to solve this), so the shaders would have to do it manually - but in doing that you could make better and more optimal texture filtering methods.
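For what it's worth, "doing it manually" would look something like this sketch of hand-written bilinear filtering (my own code, assuming the bound texture is sampled with GL_NEAREST so each fetch returns exactly one texel):

```cpp
// Hypothetical sketch of manual bilinear filtering in a fragment shader
// (GLSL stored as a C++ string): fetch the four nearest texels and blend
// them with the fractional texel position.
const char* kManualBilinear =
    "uniform sampler2D tex;\n"
    "uniform vec2 texSize;  // texture dimensions in texels\n"
    "void main()\n"
    "{\n"
    "    vec2 st = gl_TexCoord[0].st * texSize - 0.5; // texel-space position\n"
    "    vec2 i = floor(st);                          // top-left texel\n"
    "    vec2 f = st - i;                             // blend weights\n"
    "    vec4 a = texture2D(tex, (i + vec2(0.5, 0.5)) / texSize);\n"
    "    vec4 b = texture2D(tex, (i + vec2(1.5, 0.5)) / texSize);\n"
    "    vec4 c = texture2D(tex, (i + vec2(0.5, 1.5)) / texSize);\n"
    "    vec4 d = texture2D(tex, (i + vec2(1.5, 1.5)) / texSize);\n"
    "    gl_FragColor = mix(mix(a, b, f.x), mix(c, d, f.x), f.y);\n"
    "}\n";
```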
Either way, you should listen to John Carmack's last QuakeCon keynote speech; he explains this better than I can.
www.flashbang.se | www.thegeekstate.com | nehe.gamedev.net | glAux fix for lesson 6 | [twitter]thegeekstate[/twitter]
I understand only now that we were speaking about two different things. ;)
Quote: Original post by lc_overlord
...Virtual texturing does not merge all textures into one big one; instead it allows shaders to use DMA to read (or write) texture information directly from texture memory.
I think I've heard of that. I must say I'm not really up to speed on this, but from the little I've heard it looks very promising.
I'm taking your hint to look at the keynote. Thank you for pointing this out.
Previously "Krohm"
Thanks a lot guys, that's more info than I can chew...
(For a time I thought the forums were dead, but that was only my mistake ;-)
Regards, Dawid Joubert
My conclusion is:
For icons and the like (which are 32x32 or 64x64), it is better to pack them into a 512-sized texture (or 256 if you're going for old machines) and have a sprite class of some kind for handling it. Then for regular textures, like in Q3, just create a single texture each.
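A minimal sketch of such a sprite class (names are mine), pulling together the earlier posts: bind the atlas once, then draw any number of icons without further state changes. The half-texel inset from earlier in the thread is left out here for brevity:

```cpp
// Hypothetical icon-sheet class: one 512x512 atlas holding a 16x16 grid of
// 32x32 icons, bound once per batch, with per-icon UVs computed on the fly.
#include <GL/gl.h>

class IconSheet {
public:
    explicit IconSheet(GLuint atlasTex) : tex_(atlasTex) {}

    void begin() { glBindTexture(GL_TEXTURE_2D, tex_); }  // one bind per batch

    void draw(int index, float x, float y) {              // index 0..255, x/y in pixels
        const float cell = 32.0f / 512.0f;                // icon size in UV space
        float u = (index % 16) * cell;
        float v = (index / 16) * cell;
        glBegin(GL_QUADS);
            glTexCoord2f(u,        v);        glVertex2f(x,         y);
            glTexCoord2f(u + cell, v);        glVertex2f(x + 32.0f, y);
            glTexCoord2f(u + cell, v + cell); glVertex2f(x + 32.0f, y + 32.0f);
            glTexCoord2f(u,        v + cell); glVertex2f(x,         y + 32.0f);
        glEnd();
    }

private:
    GLuint tex_;
};
```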
----------------------------http://djoubert.co.uk