Brace yourself, Shader Model 6.0 is coming

20 comments, last by Dingleberry 7 years, 9 months ago

Why is HDR support coming next year, but I bought an HDR screen last year? Somebody dropped the ball.

I'm guessing that 12bit REC2020 compliance is coming soon. D3D9 can do 10bit REC2020, so that part isn't a problem :)

Will the Xbox One SDK get SM 6.0 too? I know it has some additional HLSL intrinsics, but nothing like that.

Not for me to confirm or deny, I'm afraid.

I'm guessing that 12bit REC2020 compliance is coming soon. D3D9 can do 10bit REC2020, so that part isn't a problem :)

I wasn't aware of that. Do you have a source?

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

I'm guessing that 12bit REC2020 compliance is coming soon. D3D9 can do 10bit REC2020, so that part isn't a problem :)

I wasn't aware of that. Do you have a source?

I've seen it on MSDN somewhere, but can't find the link atm.

AFAIK you just ask for a 10 or 16 bit backbuffer, instead of an 8 bit one, which will succeed if your driver/GPU/OS supports 10+ bit video out.

The user might have to tick some boxes in their driver to enable "deep color", "high color" or "extended color" or some such too, I'm not sure, as I've only used 8 bit monitors!

For details on what Windows says that drivers must do in these situations, check out the "High Color" and "Extended Color Format Support" sections in:

http://download.microsoft.com/download/7/E/7/7E7662CF-CBEA-470B-A97E-CE7CE0D98DC2/GraphicsGuideWin7.docx
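
For reference, here's a minimal sketch of what that looks like through the API (assuming D3D11 and DXGI 1.2+; the helper name is just illustrative and error handling is elided):

#include <d3d11.h>
#include <dxgi1_2.h>

// Ask DXGI for a 10-bit backbuffer instead of an 8-bit one. Creation will
// only succeed if the driver/GPU/OS can actually handle the format.
bool CreateTenBitSwapChain(IDXGIFactory2* factory, ID3D11Device* device,
                           HWND hwnd, IDXGISwapChain1** outSwapChain)
{
    DXGI_SWAP_CHAIN_DESC1 desc = {};             // Width/Height 0 = use the window size
    desc.Format = DXGI_FORMAT_R10G10B10A2_UNORM; // 10 bits per colour channel
    desc.SampleDesc.Count = 1;
    desc.BufferCount = 2;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;

    HRESULT hr = factory->CreateSwapChainForHwnd(device, hwnd, &desc,
                                                 nullptr, nullptr, outSwapChain);
    return SUCCEEDED(hr);
}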

There are more pieces to the puzzle than just a 10-bit signal though; that's something Rec709 supports, after all. I don't imagine any of those boxes in the driver options have anything to do with Rec2020.

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

There are more pieces to the puzzle than just a 10-bit signal though; that's something Rec709 supports, after all. I don't imagine any of those boxes in the driver options have anything to do with Rec2020.

Ah, you're right. The 10-bit transfer function is the same between 709/2020, so I thought they were compatible, but the primaries and the nominal range are still different.
So yeah, this news of "HDR monitor support" probably means "Rec2020 support".
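
For what it's worth, this is roughly how tagging a swap chain for Rec2020 later surfaced in DXGI 1.4 on Windows 10 (a sketch, assuming you already have an IDXGISwapChain3; support still depends on the OS, driver and display):

#include <dxgi1_4.h>

// A 10-bit signal alone isn't Rec2020 -- the swap chain also has to be tagged
// with a colour space so the OS/display know the primaries and transfer
// function in use. This picks PQ (SMPTE ST.2084) with BT.2020 primaries.
bool TryEnableRec2020(IDXGISwapChain3* swapChain)
{
    const DXGI_COLOR_SPACE_TYPE cs = DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020;

    UINT support = 0;
    if (FAILED(swapChain->CheckColorSpaceSupport(cs, &support)) ||
        !(support & DXGI_SWAP_CHAIN_COLOR_SPACE_SUPPORT_FLAG_PRESENT))
        return false; // the current output can't present this colour space

    return SUCCEEDED(swapChain->SetColorSpace1(cs));
}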


When they said procedural textures I was thinking of something else, but this does look interesting nonetheless.

If I understood correctly, this allows 'focusing' detail on specific regions of the screen?

With eye tracking, this would probably allow significant performance wins, as you could use low quality everywhere the player isn't looking. Maybe VR devices will one day get built-in eye trackers...

Or is it about having textures with differing detail in different regions?

o3o

What I want to see... I'll call 'sampler shaders'. Instead of using a normal sampler, you could create and use a 'sampler shader'. When the GPU gets to the point in a shader where it needs to sample a texture, it would stop (conceptually only) and execute the sampler shader. The sampler shader would be executed over a range (say a 16x16 or 32x32 block, whatever the driver/hardware thinks is necessary) to produce unfiltered texels, which would be stored in a cache (type, size, policies, etc. would be driver-handled). The shader performing the sampling would then be able to read from the cache, perform whatever filtering is necessary, and continue on.

The sampler shader would have a minimum blocking size (much like a compute shader) and access to on-chip shared memory. This way, complex procedural texture data could be created and used in real time, dynamically adjusting its resolution/quality based on its usage in the scene. Not only procedural texturing, but also higher-quality texture compression (wavelet, VQ, fractal, whatever) would be rather trivial...

It wouldn't be straightforward, but you could build a system like this on top of tiled resources and compute shaders.
It would be similar to the SVT ("megatexturing") tech that's built on top of tiled resources. You'd collect up a list of cache misses (unfortunately, after the pixel shader has run), allocate tiles to store the data in, then launch a compute shader to fill each tile with data.
You could avoid the one-frame delay by using deferred texturing -- instead of writing attributes into the g-buffer, you write UV coordinates and generate a separate list of cache misses (whenever a UV that you've written isn't backed by a tile). You'd then run the compute jobs to fill in all the necessary tiles, and then run a post-process to convert your uv-buffer into a real g-buffer.
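
A rough sketch of the tile-servicing step in D3D12: UpdateTileMappings and the tiled-resource types are the real API, but the TileRequest feed, the trivial tile allocator and the tile-filling compute PSO are hypothetical placeholders, and the fence needed between the mapping update and the dispatch is omitted:

#include <d3d12.h>
#include <vector>

// One cache-miss record gathered from the uv-buffer pass.
struct TileRequest { UINT mipLevel, tileX, tileY; };

void ServiceTileMisses(ID3D12CommandQueue* queue,
                       ID3D12GraphicsCommandList* cmdList,
                       ID3D12Resource* tiledTexture,     // made with CreateReservedResource
                       ID3D12Heap* tilePool,             // physical memory backing the tiles
                       ID3D12PipelineState* fillTilePSO, // hypothetical procedural-fill CS
                       const std::vector<TileRequest>& misses)
{
    UINT nextFreeTile = 0; // trivial bump allocator, for illustration only
    for (const TileRequest& req : misses)
    {
        // 1. Point one 64KB virtual tile of the texture at a physical tile in the pool.
        D3D12_TILED_RESOURCE_COORDINATE coord = {};
        coord.X = req.tileX; coord.Y = req.tileY; coord.Subresource = req.mipLevel;
        D3D12_TILE_REGION_SIZE region = {};
        region.NumTiles = 1;
        D3D12_TILE_RANGE_FLAGS rangeFlags = D3D12_TILE_RANGE_FLAG_NONE;
        UINT heapOffset = nextFreeTile++;
        UINT rangeTileCount = 1;
        queue->UpdateTileMappings(tiledTexture, 1, &coord, &region,
                                  tilePool, 1, &rangeFlags, &heapOffset,
                                  &rangeTileCount, D3D12_TILE_MAPPING_FLAG_NONE);
    }

    // 2. Generate the newly-mapped tiles: one thread group per tile, writing
    //    procedural texels through a UAV (root signature/bindings not shown).
    cmdList->SetPipelineState(fillTilePSO);
    cmdList->Dispatch((UINT)misses.size(), 1, 1);
}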

I wonder if they can fit SM6 onto DX12 feature-level hardware. The GPU ecosystem for DX12 is quite segmented today, with some FL12_1 GPUs not supporting resource binding tier 3, the programmable stencil reference value (SV_StencilRef), or even async compute.

"async compute" (as "async copy") has quite few to do with the shader model, same as resource binding tier (except for tier1).

What is really fragmented on NVIDIA and AMD hardware today (and on pre-Haswell Intel iGPUs) are the shader semantics for conservative rasterization (CR), ROVs and SV_StencilRef. GCN Gen 1 and Gen 2 also do not support FP16 minimum precision the way NVIDIA hardware does.

So there will either be another well-defined set of caps bits for shader intrinsics and semantics, or a completely new FL... But as for the "language" features, most of them should work on all hardware. I'm not sure about every feature, but I'd guess the requirements will mirror OpenCL 1.2 or OpenCL 2.0 for most of those language features (I would bet more on the second, since they mentioned FL 12_0+ hardware only... for now; I don't know how much IHVs will care about FL 11/11.1+ hardware in 2017...)
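
For context, that per-feature fragmentation is already queryable today with a single caps call; a minimal sketch (error handling aside):

#include <d3d12.h>
#include <cstdio>

// Conservative raster, ROVs, SV_StencilRef and FP16 minimum precision all
// report through one D3D12_FEATURE_D3D12_OPTIONS query.
void PrintShaderCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return;

    printf("Conservative raster tier: %d\n", (int)opts.ConservativeRasterizationTier);
    printf("ROVs supported:           %d\n", (int)opts.ROVsSupported);
    printf("PS SV_StencilRef:         %d\n", (int)opts.PSSpecifiedStencilRefSupported);
    printf("FP16 min precision:       %d\n",
           (opts.MinPrecisionSupport & D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT) != 0);
    printf("Resource binding tier:    %d\n", (int)opts.ResourceBindingTier);
}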

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

PDF: http://1drv.ms/1T8iew9

video:

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

