skinning with VertexShader Streamout => point topology

Started January 22, 2018 01:35 PM
2 comments, last by MJP 7 years ago

Hi,

After implementing skinning with a compute shader, I want to implement skinning with the vertex shader stream-out method to compare performance.

The following setup was recommended in a related discussion thread:

  • Use a pass-through geometry shader (point -> point), set up stream-out, and set the topology to point list.
  • Draw the whole buffer with context->Draw(). This gives a 1:1 mapping of the vertices.
  • Later, bind the stream-out buffer as a vertex buffer and bind the index buffer of the original mesh.
  • Draw with DrawIndexed like you would with the original mesh (or whatever draw call you had).
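The pass-through stage in the setup above could look roughly like this. This is a hedged sketch, not the thread's actual code; the struct name and semantics are assumptions, and the semantic names must match whatever stream-out declaration is created on the C++ side:

```hlsl
// Hypothetical stream-out vertex layout; the semantic names must match
// the D3D11_SO_DECLARATION_ENTRY array passed to
// CreateGeometryShaderWithStreamOutput on the C++ side.
struct VSOut
{
    float3 position : POSITION;   // skinned position
    float3 normal   : NORMAL;     // skinned normal
    float2 texcoord : TEXCOORD0;  // passed through unchanged
};

// Pass-through geometry shader: one point in, one point out,
// so context->Draw(vertexCount, 0) gives a 1:1 vertex mapping.
[maxvertexcount(1)]
void GSStreamOut(point VSOut input[1], inout PointStream<VSOut> stream)
{
    stream.Append(input[0]);
}
```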

I understand why a point list is used as input: with the normal vertex topology as input, the output would be a stream of fully expanded "each on its own" primitives that would blow up the vertex buffer. I assume an index buffer would then be needless?

But how can you transform position and normal in one step when feeding the pass-through vertex/geometry shader with a point list?

In my vertex shader I first calculate the blended transform matrix from the four bone indices and four weights, then transform both position and normal with that same resulting matrix.
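A minimal sketch of that blend, assuming a constant-buffer bone palette (the buffer name, palette size, and input layout are assumptions, not the poster's actual code):

```hlsl
// Assumed bone palette; 128 is an arbitrary size.
cbuffer BoneBuffer
{
    float4x4 cBones[128];
};

struct VSIn
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    uint4  indices  : BLENDINDICES;  // 4 bone indices
    float4 weights  : BLENDWEIGHT;   // 4 bone weights
};

struct VSOut
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
};

VSOut VSSkin(VSIn v)
{
    // Blend the four bone matrices once...
    float4x4 skin = v.weights.x * cBones[v.indices.x]
                  + v.weights.y * cBones[v.indices.y]
                  + v.weights.z * cBones[v.indices.z]
                  + v.weights.w * cBones[v.indices.w];

    VSOut o;
    // ...then use the same matrix for both position and normal,
    // so the transform itself needs no second pass.
    o.position = mul(float4(v.position, 1.0f), skin).xyz;
    o.normal   = normalize(mul(v.normal, (float3x3)skin));
    return o;
}
```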

Do I have to run two passes, one for transforming the position and one for transforming the normal?

I think it could be done better?

Thanks for any help.

 

Hi,

You can bind up to 4 simultaneous stream-out targets, so if your vertex positions and normals are in separate buffers, you can output both of them just fine.
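The split into multiple targets is driven from the C++ side; the shader just declares the outputs. A hedged sketch, assuming positions in one buffer and normals in another:

```hlsl
// Sketch: positions and normals routed to two stream-out targets.
// On the C++ side, the D3D11_SO_DECLARATION_ENTRY array would give
// POSITION OutputSlot 0 and NORMAL OutputSlot 1, and SOSetTargets
// would bind one buffer per slot. Names here are assumptions.
struct VSOut
{
    float3 position : POSITION;  // -> stream-out buffer in slot 0
    float3 normal   : NORMAL;    // -> stream-out buffer in slot 1
};
```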

In the naïve implementation, I doubt you could get more performance out of it than the compute shader approach. But there is potentially another use case here: you stream out while you also draw. For example, you have multiple passes that want to reuse the animated vertex data. The naive approach would first do a stream-out pass, and subsequent rendering passes would use the result of that. The improved approach is that the first rendering pass which requires animated vertex data does animation, rendering, and stream-out at the same time, and the subsequent passes then only render with the animated data which is already available.

And remember, you don't have to create a geometry shader for this if you are just animating. Since DX10.1, I think you can provide a vertex shader to the CreateGeometryShaderWithStreamOutput function and it will work.

5 hours ago, evelyn4you said:

But how can you transform position and normal in one step when feeding the pass-through vertex/geometry shader with a point list?

You can't, unfortunately. You either have to accept having a fully expanded vertex buffer (which can be MUCH bigger than your original indexed VB), or you need to do a separate pass. Neither is ideal, really.

There is another option, but only if you're running on Windows 8 or higher: UAVs from the vertex shader. All you need to do is bind a structured buffer (or any kind of buffer) as a UAV, and then write to it from your vertex shader using SV_VertexID as the index.
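The UAV approach could be sketched like this. The struct name, register slot, and binding call are assumptions for illustration, and the actual skinning math is elided:

```hlsl
// Sketch of the UAV approach (Windows 8+ / D3D 11.1 runtime): bind a
// structured buffer UAV alongside the render targets (e.g. via
// OMSetRenderTargetsAndUnorderedAccessViews) and index it with
// SV_VertexID. SkinnedVertex and slot u1 are assumptions.
struct SkinnedVertex
{
    float3 position;
    float3 normal;
};

RWStructuredBuffer<SkinnedVertex> gSkinnedVerts : register(u1);

float4 VSMain(float3 pos : POSITION, float3 nrm : NORMAL,
              uint vid : SV_VertexID) : SV_Position
{
    // ...skinning would happen here as usual...
    SkinnedVertex v;
    v.position = pos;   // would be the skinned position
    v.normal   = nrm;   // would be the skinned normal
    gSkinnedVerts[vid] = v;   // one write per input vertex
    return float4(v.position, 1.0f);
}
```

Unlike stream-out, this needs no geometry shader or SO declaration, and the buffer can later be bound as a shader resource or vertex buffer by the subsequent passes.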

This topic is closed to new replies.
