
OpenGL vs DirectX attribute buffers

Started by April 16, 2020 05:41 PM
4 comments, last by 21st Century Moose 4 years, 9 months ago

After over a decade in OpenGL I finally poked at some DirectX, and I'm curious how the driver deals with this, if anyone knows. It might be confusing, but the basic issue is that my mesh data below doesn't have a w coordinate, for obvious data-size reasons. In DirectX I can simply bind the position data as a float4, and just set the w coordinate. In OpenGL I am not allowed to modify the w coordinate of an input; I have to create a temporary vec4 from the input and set w to 1.

My assumption here is that in DirectX the vertex shader input has already been copied into a usable temporary that I can modify without touching the actual mesh buffer. In OpenGL my assumption is that the input coming in is effectively the input buffer itself, which it will not let me modify, or my mesh buffer would be changing.

Mesh buffer layout: [float x, float y, float z, float textureU, float textureV]
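
For concreteness, the interleaved vertex described above maps to something like this C++ struct (a sketch; the struct and field names are mine, not from the post):

// Hypothetical mirror of the mesh buffer layout: 3 floats of position (no w), 2 floats of UV.
struct Vertex {
    float x, y, z;   // position -- no w component is stored
    float u, v;      // texture coordinates
};
static_assert(sizeof(Vertex) == 5 * sizeof(float), "expected tight packing, 20 bytes per vertex");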

DirectX layout/shader (HLSL):
struct Input { float4 position : POSITION; float2 uv : TEXCOORD0; };
float4 main(Input input) : SV_Position { input.position.w = 1.0; /* do stuff here */ return input.position; }
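
What makes that legal on the API side is the input layout: it describes the buffer data as three floats, while the HLSL signature declares a float4. A rough sketch, assuming D3D11 (the post doesn't say which version) and with variable names of my own:

// The Format fields describe what is actually in the buffer; the HLSL struct above
// declares POSITION as a float4, and the input assembler bridges the difference.
D3D11_INPUT_ELEMENT_DESC elements[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
device->CreateInputLayout(elements, 2, vsBytecode, vsBytecodeSize, &inputLayout); // device, vsBytecode, etc. are placeholders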

OpenGL layout/shader (works):
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 uv;
void main() { vec4 newPosition = vec4(position, 1.0); /* do stuff */ }
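
For comparison, the GL-side attribute setup for that shader would look roughly like this (a sketch, assuming core profile and buffer names of my own). Note that if the shader instead declared the input as vec4, the same size-3 attribute pointer is still legal and GL supplies w = 1.0; the input variable just remains read-only in GLSL:

glBindBuffer(GL_ARRAY_BUFFER, meshVbo);  // meshVbo: placeholder name for the mesh vertex buffer object
// location 0: 3 floats of position, location 1: 2 floats of UV, interleaved with a 20-byte stride
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);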


OpenGL layout/shader (fails compilation, can't modify a shader input):
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uv;
void main() { position.w = 1.0; /* do stuff */ } // compile error: shader inputs are read-only

NBA2K, Madden, Maneater, Killing Floor, Sims

I haven't looked deeply at the mechanics, but since ideally all the locals fit in registers, I think it makes sense to allow writing/changing them (they won't be written back to the original vertex buffer anyway). A write to memory is then only needed if the shader runs out of register space.


dpadam450 said:
In DirectX I can simply bind the position data as a float4, and just set the w coordinate.

I'm not too sure that's a good idea, though. The shader will probably have to do the same amount of work as if you'd introduced a float4 yourself, and it probably has to do some additional work to patch up the mismatching input layout. Ergo, I believe you are probably better off keeping the position as the data type it actually is inside the vertex buffer and only making it a float4 when you do the transform (though it would be nice to hear from someone with more experience with graphics drivers and such).

Yeah, it's strange that it let me do that. I was working through a tutorial and was confused about how he was getting away with a buffer containing [float3, float2] while the shader declared [float4, float2].

I also just validated that the inputPosition.w coordinate in the shader is not actually mapping to the next float in the buffer (which would be the u or v texture coordinate, depending on byte order). Like you said, it is probably doing some kind of mapping: inputPosition.w was always 1.0 regardless of the u,v values in the buffer.

NBA2K, Madden, Maneater, Killing Floor, Sims

This is well-documented but I can't find a link at present; inputs are expanded out with {0,0,0,1} as required without anything needing to be done by you. So it's valid to have float(n) in your buffer but float(n+x) as your shader input and the IA stage will automatically expand it, presumably when moving data from your buffer to the shader input regs.
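
Conceptually, that expansion amounts to something like the following (an illustration only, not actual hardware or driver code; the names are mine):

// Fill any components missing from the buffer with the defaults {0, 0, 0, 1}.
void ExpandAttribute(const float* bufferElement, int componentsInBuffer, float expanded[4])
{
    const float defaults[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
    for (int i = 0; i < 4; ++i)
        expanded[i] = (i < componentsInBuffer) ? bufferElement[i] : defaults[i];
}
// For a float3 position fed to a float4 input: componentsInBuffer == 3, so w ends up as 1.0.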

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.
