HLSL mul() and row/column major matrices in DirectX

I have my WVP matrix = World * View * Projection in Direct3D. If DirectX stores its matrices in row-major order, and the HLSL mul() function can do mul(vertex, matrix) as row-major-style multiplication, then why must I transpose the DirectX matrix before sending it to the HLSL vertex shader? Also, how can I have DirectX send in column-major matrices and have those multiply out properly? I've tried WVP = Projection * View * World and sent that into the vertex shader, which uses mul(wvp, vertex) (which should be column-major), and it doesn't work!
A matrix being "row major" or "column major" doesn't imply a specific storage layout, at least when speaking about the math rather than about a programming language.
Perhaps you're using a math library that stores matrices by columns, or maybe you've generated the matrices transposed in the first place.
If I understand your description, you're assuming that you can generate the transpose of a matrix product simply by reversing the order of multiplication. However,
N.B.: Projection * View * World is NOT equivalent to (World * View * Projection)^T.
(World * View * Projection)^T = Projection^T * View^T * World^T; that is, the transpose of the product is the product of the transposes in reverse order, and it is not equivalent to Projection * View * World.
I.e., P^T * V^T * W^T is what you want to use in the shader; even taking the transpose of Projection * View * World only gets you W^T * V^T * P^T, which again is not the same thing.
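To make the difference concrete, here is a tiny 2x2 example (my own numbers, not from the post). Take

A = 1,2
    3,4
B = 0,1
    1,0

Then:

A*B:
2,1
4,3
(A*B)^T:
2,4
1,3
B^T * A^T:
2,4
1,3
B*A (order reversed, but no transposes):
3,4
1,2

So (A*B)^T and B^T * A^T agree, but simply reversing the order gives a different matrix.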
Btw, see this: http://msdn.microsoft.com/en-us/library/windows/desktop/bb509634%28v=vs.85%29.aspx
HLSL uses row-vector, column-major math by default, i.e. it expects the matrix to be column-major packed in order to do the transform in the most efficient way, with dot-product instructions.
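To make that concrete, here is a minimal vertex-shader sketch of that default setup (the cbuffer, struct, and function names are my own, not from the thread). It assumes the C++ side uploads the transpose of its row-major World * View * Projection, e.g. via XMMatrixTranspose, so that the column-major-packed constants reconstruct the original matrix:

// Default packing: each constant register holds one column of wvp.
cbuffer PerObject
{
    float4x4 wvp;   // C++ side writes transpose(World * View * Projection) here
};

struct VSIn  { float4 pos : POSITION;    };
struct VSOut { float4 pos : SV_POSITION; };

VSOut VSMain(VSIn input)
{
    VSOut o;
    // Row-vector maths: position on the left, matrix on the right.
    // With column-major packing this compiles to four dp4 instructions.
    o.pos = mul(input.pos, wvp);
    return o;
}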
I am using XNAMath, and I think this stores its matrices in row-major order. I see now that my assumption that W*V*P = (W*V*P)^T was wrong when I wondered whether I could multiply P*V*W * vertex using column-major math. What I'm wondering now is: why do I have to transpose the WVP matrix before sending it to the HLSL vertex shader? And how does the HLSL mul() function work, that is, how is mul(vertex, wvp) different from mul(wvp, vertex)? Thanks.
why do I have to transpose the WVP matrix before sending it to the HLSL vertex shader?
Because HLSL by default expects a column-major packed matrix, not a row-major one.
Having a single column in one constant register makes it possible to compute each resulting vector component with one dot-product instruction.
A single mul(row-vector, column-major-matrix) will be calculated as:
dp4 oPos.x, v0, c0
dp4 oPos.y, v0, c1
dp4 oPos.z, v0, c2
dp4 oPos.w, v0, c3
while mul(column-major-matrix, column-vector) will be calculated as:
mul r0, v0.y, c1
mad r0, c0, v0.x, r0
mad r0, c2, v0.z, r0
mad oPos, c3, v0.w, r0
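In HLSL source, those two listings correspond to the following lines (a sketch; it assumes wvp is a float4x4 in a constant buffer with the default column-major packing, and input.pos is a float4):

// One dp4 per output component (the first listing above):
output.pos = mul(input.pos, wvp);

// The mul/mad sequence (the second listing above):
output.pos = mul(wvp, input.pos);

Note that the two lines compute vector*matrix and matrix*vector respectively, which are transposes of each other, so for any given uploaded matrix only one of them is the transform you actually want.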
You can change the default behavior with the fxc options /Zpr and /Zpc.
/Zpr will inform the HLSL compiler that matrices will be packed in row-major order, and you will then have to use mul(row-major-matrix, column-vector).
how does the HLSL mul() function work, that is, how is mul(vertex, wvp) different from mul(wvp, vertex)?
If you google for "hlsl mul function," you, too, can find the MSDN documentation for mul() (linked above).
How can I send in the WVP matrix, not having to transpose it?
How can I send in the WVP matrix, not having to transpose it?
You can either specify /Zpr when compiling the shader, or add the "row_major" type modifier to the wvp variable.
In both cases you would then rewrite the shader to multiply matrix by vector rather than vector by matrix, to keep the one-dot-product-per-component form shown above.
Mathematical "row major" matrix:
x1,x2,x3,0
y1,y2,y3,0
z1,z2,z3,0
p1,p2,p3,1
Mathematical "column major" matrix:
x1,y1,z1,p1
x2,y2,z2,p2
x3,y3,z3,p3
0, 0, 0, 1
When you choose between these, it determines whether you'll be writing:
vecD = vecC * matrixA * matrixB
Or
vecD = matrixB * matrixA * vecC
Computer Science row/column major defines how you store arrays in memory.
Given the data:
ABCD
EFGH
IJKL
MNOP
A comp-sci "row major" storage order is:
ABCDEFGHIJKLMNOP
A comp-sci "column major" storage order is:
AEIMBFJNCGKODHLP
The storage order has no impact on the maths at all. If you're changing the maths because of the comp-sci majorness, then something is wrong.
HLSL uses comp-sci column-major array storage by default (but you can override this with the row_major keyword).
HLSL does not pick a mathematical convention for you - that's just determined by the way you do your maths.
On the C++ side, the maths library that you're using will determine both the array storage convention and the mathematical convention.
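As a sketch of what that means in practice (the variable names are my own, and this assumes an XNAMath-style library on the C++ side, i.e. row-major storage and row-vector maths), the same maths can be fed to the shader with either storage convention:

// Option A: keep HLSL's default column-major packing and have the C++ side
// upload XMMatrixTranspose(world * view * proj).
float4x4 wvpA;

// Option B: ask for row-major packing and upload world * view * proj as-is.
row_major float4x4 wvpB;

// Either way the shader maths stays vector-times-matrix:
//   outPos = mul(inPos, wvpA);   // or wvpB
// Only the layout of the matrix in the constant registers differs.

That is exactly the sense in which the storage order has no impact on the maths.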
What math library are you using?