
Depth-stencil buffer textures only supported on NVIDIA cards?

Started by November 09, 2007 09:29 AM
4 comments, last by ET3D 16 years, 10 months ago
Hi. I've been struggling to get some code to work. Basically, what I want to do is create a depth-stencil buffer texture, assign it to the device and render:

m_pDevice->CreateTexture(desc.iWidth, desc.iHeight, desc.iMipLevels, D3DUSAGE_DEPTHSTENCIL, D3DFMT_D24S8, D3DPOOL_DEFAULT, &m_pTexture, 0);

...

IDirect3DSurface9 *pDSSurface = 0;
m_pTexture->GetSurfaceLevel(0, &pDSSurface);
m_pDevice->SetDepthStencilSurface(pDSSurface);
pDSSurface->Release(); // SetDepthStencilSurface holds its own reference
Unfortunately CreateTexture() fails with D3DERR_INVALIDCALL. After some searching around on the net I found someone saying that this only works on NVIDIA cards! Is this true? If that's the case, what would be the best way to work around this? My first thought was to use CreateDepthStencilSurface() and create a texture of the same size. Then, whenever I need to use the texture, I'd fetch the surface of the texture and use D3DXLoadSurfaceFromSurface() to copy the depth-stencil buffer over to the texture... though this sounds really stupid ;) Is there no better way?
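Something like this is what I had in mind (untested, and I'm not even sure D3DX will let you copy out of a depth-stencil surface at all; the texture would presumably have to be created as a plain texture rather than with D3DUSAGE_DEPTHSTENCIL):

IDirect3DSurface9 *pDS = 0;
m_pDevice->CreateDepthStencilSurface(desc.iWidth, desc.iHeight, D3DFMT_D24S8,
                                     D3DMULTISAMPLE_NONE, 0, TRUE, &pDS, 0);

// ... render the scene with pDS set as the depth-stencil surface ...

// Copy the depth-stencil contents into the texture so it can be sampled.
IDirect3DSurface9 *pTexSurface = 0;
m_pTexture->GetSurfaceLevel(0, &pTexSurface);
D3DXLoadSurfaceFromSurface(pTexSurface, 0, 0, pDS, 0, 0, D3DX_DEFAULT, 0);
pTexSurface->Release();
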
ATI cards have a FOURCC format that allows you to use depth textures. If you have an ATI card, that's the way to go. Keep in mind the two formats sample differently in the shader, so be sure to read up on what the returned value is from the tex2D instruction.
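Something along these lines should do it (untested sketch using your variable names; pD3D is your IDirect3D9 interface, and the D3DFMT_DF24 constant is just a local name I'm giving the FOURCC - since it's vendor-specific, it's worth probing for support first):

const D3DFORMAT D3DFMT_DF24 = (D3DFORMAT)MAKEFOURCC('D', 'F', '2', '4');

// Probe for the format before relying on it.
HRESULT hr = pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                     D3DFMT_X8R8G8B8, D3DUSAGE_DEPTHSTENCIL,
                                     D3DRTYPE_TEXTURE, D3DFMT_DF24);
if (SUCCEEDED(hr))
{
    // Same create call as before, just with the FOURCC format.
    hr = m_pDevice->CreateTexture(desc.iWidth, desc.iHeight, 1,
                                  D3DUSAGE_DEPTHSTENCIL, D3DFMT_DF24,
                                  D3DPOOL_DEFAULT, &m_pTexture, 0);
}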
Sirob Yes.» - status: Work-O-Rama.
Hi and thanks for the answer. Tried using MAKEFOURCC('D', 'F', '2', '4') and that worked :)
Unfortunately, as you said, this is an ATI-specific "feature" (why didn't they just allow the normal depth-stencil formats?), and considering the format... does this mean that I would lose the stencil buffer whenever I use this as my depth-stencil buffer?

So with this in mind, it feels like I might as well create an extra RT in my MRT for Z, or assign it to some unused channel in the other RTs.
Quote: Original post by johanderson
this is an ATI-specific "feature" (why didn't they just allow the normal depth-stencil formats?)
It's a good question and one of the primary motivations behind the "cap-less" architecture of Direct3D 10. Not only does the D3D9 API allow for capability bits, but the various IHVs still chose to implement things differently and hack the detection/activation in on top of it [rolleyes].

Quote: Original post by johanderson
and considering the format... does this mean that I would lose the stencil buffer whenever I use this as my depth-stencil buffer?
I've not tried it, but that would be my interpretation.

Quote: Original post by johanderson
So with this in mind, it feels like I might as well create an extra RT in my MRT for Z, or assign it to some unused channel in the other RTs.
Ultimately it's just an optimization - chances are this format hooks up to the highly specialised and highly optimised HiZ/Early-Z hardware ATI have. You'd need to read through the ATI/AMD SDK whitepapers for more information...

Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Quote: Original post by johanderson
So with this in mind, it feels like I might as well create an extra RT in my MRT for Z, or assign it to some unused channel in the other RTs.

This might well be the easiest solution to implement, and if you feel good about doing that, by all means, go for it. Using NVIDIA- and ATI-specific optimizations is great, and can provide very nice performance increases in some situations, but it shouldn't be binding.
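For reference, the MRT route would look something like this (rough sketch using made-up names; check D3DCAPS9::NumSimultaneousRTs and render-target support for R32F on your target cards before committing to it):

// Extra render target to receive depth alongside the normal colour output.
IDirect3DTexture9 *pDepthRT = 0;
m_pDevice->CreateTexture(desc.iWidth, desc.iHeight, 1, D3DUSAGE_RENDERTARGET,
                         D3DFMT_R32F, D3DPOOL_DEFAULT, &pDepthRT, 0);

IDirect3DSurface9 *pDepthRTSurface = 0;
pDepthRT->GetSurfaceLevel(0, &pDepthRTSurface);

m_pDevice->SetRenderTarget(0, pColourSurface);   // your usual colour target
m_pDevice->SetRenderTarget(1, pDepthRTSurface);  // pixel shader writes depth here (COLOR1)

// Render the scene, then bind pDepthRT with SetTexture() wherever you need depth.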
Sirob Yes.» - status: Work-O-Rama.
Quote: Original post by johanderson
Unfortunately, as you said, this is an ATI-specific "feature" (why didn't they just allow the normal depth-stencil formats?)

Note that NVIDIA doesn't really allow normal depth-stencil formats for textures, either. Sure, they piggy-back on the standard formats, but it's not as if you're able to read the depth in the shader.

It all stems from D3D9 not providing this as a real option, forcing each company to provide its own extension. IMO, ATI did the cleaner thing, since NVIDIA's solution is misleading (you'd expect to read depth values but won't get them).
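To illustrate the difference (rough sketch of the NVIDIA path; the shader side is only described in comments, and behaviour can vary by driver):

// On NVIDIA hardware the original create call generally succeeds as-is:
IDirect3DTexture9 *pShadowTex = 0;
m_pDevice->CreateTexture(desc.iWidth, desc.iHeight, 1, D3DUSAGE_DEPTHSTENCIL,
                         D3DFMT_D24S8, D3DPOOL_DEFAULT, &pShadowTex, 0);

// You can bind it like any other texture...
m_pDevice->SetTexture(0, pShadowTex);
// ...but sampling it with a projected coordinate (tex2Dproj in the shader)
// returns a filtered depth-comparison result in [0,1], not the stored depth -
// great for shadow mapping, useless if you actually wanted to read Z.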

This topic is closed to new replies.
