Texture data flow from CPU to GPU


Nagle said:
What systems actually do that kind of swapping in and out of GPU memory

All modern GPUs do this under OpenGL, where it's up to the driver/hardware to manage the mipmap levels for you. This applies to OpenGL; someone else will have to elaborate on how other APIs handle it.

(By using the proper extensions, OpenGL can also generate the mipmap levels for you automatically when uploading the texture, lifting one extra burden from your shoulders.)
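For illustration, a minimal sketch of what that looks like (assumes a GL 3.0+ context, where glGenerateMipmap is core, plus an extension loader such as glad or GLEW; `texture`, `width`, `height`, and `pixels` are placeholder names, not anything from this thread):

```cpp
// Upload the base level, then let the driver build the full mip chain.
// (texture, width, height, pixels are assumed to already exist.)
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D); // driver generates all remaining levels
```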

That it does this doesn't mean it's going to be a good experience, though. If you were thinking of relying on this behavior of OpenGL, I would strongly discourage you from doing so, because with a lot of drivers and GPUs it turns into a disaster once you run out of VRAM.


Another approach, if your 3D API allows it, is to have some form of buffer (and by extension, directly or indirectly, texture memory) persistently mapped into client address space. This provides a ‘direct’ path for uploading your texture data to the GPU. However, as before, you have to be careful and provide some form of synchronization to ensure that the texture is not being used by the GPU (for a draw or any other purpose) before the data is fully uploaded. For example, in the case of OpenGL, you can create a few pixel buffer objects, map the buffer stores persistently, and then use these as the source for your glTex(ture)SubImage* calls. The downside is that you'll use a little more GPU memory, as the pixel buffer(s) themselves require memory allocation. The ‘upside’ is that you can re-use the pixel buffers to upload texture data for any number of textures, either via partial or full-size uploads.
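To make that concrete, here's a rough sketch of the OpenGL path described above. It's a sketch, not production code: it assumes a GL 4.4+ context for glBufferStorage/persistent mapping (GL 4.5 for the DSA-style glTextureSubImage2D call), and `texture`, `pixels`, `width`, `height`, `bufferSize`, and `uploadSize` are placeholder names:

```cpp
// One-time setup: a persistently-mapped pixel buffer object.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                         GL_MAP_COHERENT_BIT;
glBufferStorage(GL_PIXEL_UNPACK_BUFFER, bufferSize, nullptr, flags);
void* mapped = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, bufferSize, flags);

// Per upload: copy the pixel data into the mapped store, then source the
// texture update from the bound PBO (the 'pixels' argument becomes a byte
// offset into the buffer, here 0).
memcpy(mapped, pixels, uploadSize);
glTextureSubImage2D(texture, 0 /*level*/, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

// Synchronization: fence after the upload and wait (or poll) so the CPU
// doesn't overwrite a region the GPU may still be reading from.
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, UINT64_MAX);
glDeleteSync(fence);
```

In practice you'd treat the mapped store as a ring buffer with one fence per region, so several uploads can be in flight before you wrap around and reuse memory.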

I don't think you should optimize for device-lost errors - they should be incredibly rare over the running time of your application. I think it's acceptable for games (not applications in general) to just crash, because you should do your best to avoid device loss in the first place. Even if you find that unacceptable, I don't think you need to implement a “nice” or “smooth” recovery mechanism: the application is already in a ~bad state (probably going to hitch at minimum). In my experience, device-lost errors happen in roughly two scenarios:

  1. A bug in the application (actual programming bugs, or soft bugs like not respecting memory budgets or not checking available extensions)
  2. A bug in the graphics driver (not much you can do about this, and the program will likely hit it again even if you recover)

If you absolutely feel that you must gracefully handle device loss, then a common approach would be to re-queue all texture (and buffer, etc.) uploads. You can of course implement this in a variety of ways ranging from simple and hitchy to complex and smooth, depending on how your resource upload code works. For example, I think all modern engines with high texture memory requirements probably need some form of texture streaming system. If you already have this, then you can get away with re-uploading just the lowest-res mips for all textures upon device loss (which should get things working again ~immediately) and “catch up” on the rest over the following frames (based on texture upload bandwidth constraints). You could keep all your textures mapped in CPU memory just in case of device loss, but then you're paying for all that overhead 100% of the time for a situation that hopefully arises very rarely.
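As an illustration of the “lowest mips first, catch up later” idea, here's a sketch where every type, field, and function name is a hypothetical stand-in for whatever bookkeeping your streaming system actually keeps:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for a streaming system's bookkeeping.
struct StreamedTexture {
    uint32_t id;
    uint32_t mipCount;      // total mips in the source asset
    uint32_t residentMips;  // mips that were resident before the device loss
};

struct UploadRequest {
    uint32_t textureId;
    uint32_t mipLevel;      // 0 = largest
    int      priority;      // higher = uploaded sooner
};

// After recreating the device and GPU resources, re-queue the smallest mip
// of every texture at top priority so rendering resumes ~immediately, then
// queue the previously-resident mips at normal priority so the streamer
// catches up over the following frames within its bandwidth budget.
void RequeueAfterDeviceLoss(const std::vector<StreamedTexture>& textures,
                            std::vector<UploadRequest>& queue)
{
    for (const StreamedTexture& tex : textures) {
        const uint32_t lowest = tex.mipCount - 1; // smallest mip level
        queue.push_back({tex.id, lowest, /*priority*/ 100});
        for (uint32_t i = 1; i < tex.residentMips; ++i)
            queue.push_back({tex.id, lowest - i, /*priority*/ 0});
    }
}
```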

