So, in the end, I'm still not entirely sure why the above code doesn't work, but I believe it has to do with how matrices are multiplied, which varies between two conventions: row-major and column-major. There's a much better description posted as an answer to this question. Even with this in mind, though, I was unable to fix the problem in my code as it stands (possibly because of all the arbitrary transposing I'd done near the middle of my code, which was inspired by some rather shoddy examples).
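For anyone else tripping over the same thing, the core of it is that HLSL's mul treats a vector on the left as a row vector and a vector on the right as a column vector, so the two conventions differ by exactly one transpose. A quick sketch of the equivalence (v and M here are just stand-ins for any float4 and float4x4, not names from my code):

// Treating v as a row vector on the left...
float4 a = mul(v, M);
// ...gives the same result as treating v as a column vector
// multiplied against the transposed matrix.
float4 b = mul(transpose(M), v);

On top of that, matrices in a cbuffer are packed column-major by default, which is why the CPU-side transpose further down turns out to be necessary.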
I then stumbled upon another question asking whether I should be calculating my model-view-projection matrix in the shader at all. It seems that the best approach is to do that multiplication once on the CPU and send the result over to the GPU as a single pre-multiplied matrix. I rewrote the hlsl file to this instead:
cbuffer MatrixBuffer
{
    matrix worldViewProjMatrix;
};

[...]

PixelInputType main(VertexInputType input)
{
    PixelInputType output;

    // Extend the position vector to four components for proper matrix calculations.
    input.position.w = 1.0f;

    // Transform the output position to clip space using the combined w-v-p matrix.
    output.position = mul(input.position, worldViewProjMatrix);

    // Store the texture coordinates for the pixel shader.
    output.tex = input.tex;

    return output;
}
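The [...] elides the VertexInputType and PixelInputType structs. For context, a typical layout matching how they're used here (sketched for illustration, not copied verbatim from my project) would be something like this; note that position has to be declared float4 in the vertex input for the .w assignment to compile:

struct VertexInputType
{
    float4 position : POSITION;   // float4 so .w can be set above
    float2 tex : TEXCOORD0;
};

struct PixelInputType
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};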
Then, as per the suggestion on the second question I referenced, I calculated the model-view-projection matrix on the CPU using the following:
Matrix worldViewMatrix = worldMatrix * viewMatrix;
Matrix worldViewProjMatrix = worldViewMatrix * projectionMatrix;

// HLSL expects column-major packing in the cbuffer by default, while the
// math library here is row-major, so transpose before uploading.
worldViewProjMatrix.Transpose();
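I then wrote this matrix to the constant buffer. For completeness, the upload step looks roughly like the following. This is a sketch rather than my exact code: it assumes a C++/D3D11 setup where the buffer was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE, and deviceContext and matrixBuffer are placeholder names.

// Map the dynamic constant buffer so the CPU can write into it.
D3D11_MAPPED_SUBRESOURCE mapped;
if (FAILED(deviceContext->Map(matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    return false;

// Copy the pre-transposed world-view-projection matrix into the buffer.
memcpy(mapped.pData, &worldViewProjMatrix, sizeof(worldViewProjMatrix));
deviceContext->Unmap(matrixBuffer, 0);

// Bind the buffer to the vertex shader; slot 0 matches the first cbuffer.
deviceContext->VSSetConstantBuffers(0, 1, &matrixBuffer);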
It turned out that the way in which I was actually acquiring the world, view, and projection matrices was not the issue at all. Hopefully this helps somebody else who runs into the same problem later.