Unable to create matrix representing extrinsic Euler rotation - c++

I'm trying to create a 4x4 affine transformation matrix from an Euler triplet and a rotation order. I want the resulting matrix to represent the rotation extrinsically (i.e. fixed frame). My application uses a left-handed coordinate frame and column vectors.
Following the maths provided by Wikipedia and VectorAlgebra, the equation for an extrinsic rotation about the fixed axes i, then j, then k (acting on column vectors) should be:
R = R_k(θ_k) · R_j(θ_j) · R_i(θ_i)
However, if I rotate an arbitrary amount on the zAxis, the subsequent yAxis or xAxis rotations are performed on the object's local equivalent axes. This is the opposite of what I want!
The code I'm using is below:
Matrix4f Sy_3dMath::createRotationMatrix( Axis axis, float radians )
{
    float c = cos( radians );
    float s = sin( radians );

    if ( axis == X ) {
        return ( Matrix4f() << 1.0f, 0.0f, 0.0f, 0.0f,
                               0.0f, c,    -s,   0.0f,
                               0.0f, s,    c,    0.0f,
                               0.0f, 0.0f, 0.0f, 1.0f ).finished();
    } else if ( axis == Y ) {
        return ( Matrix4f() << c,    0.0f, s,    0.0f,
                               0.0f, 1.0f, 0.0f, 0.0f,
                               -s,   0.0f, c,    0.0f,
                               0.0f, 0.0f, 0.0f, 1.0f ).finished();
    } else {
        return ( Matrix4f() << c,    -s,   0.0f, 0.0f,
                               s,    c,    0.0f, 0.0f,
                               0.0f, 0.0f, 1.0f, 0.0f,
                               0.0f, 0.0f, 0.0f, 1.0f ).finished();
    }
}
Matrix4f Sy_3dMath::matrixFromEulerAngles( const Vector3f& euler,
                                           Sy_3dMath::RotationOrder order )
{
    EulerData ed = getEulerData( order );

    return createRotationMatrix( static_cast< Axis >( ed.k ), euler( ed.k ) ) *
           createRotationMatrix( static_cast< Axis >( ed.j ), euler( ed.j ) ) *
           createRotationMatrix( static_cast< Axis >( ed.i ), euler( ed.i ) );
}
EulerData is a simple struct containing 3 ints that represent the axis indices (i.e. 2 == zAxis == k, 0 == xAxis == i).
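For reference, a minimal sketch of what EulerData and getEulerData might look like under this scheme (hypothetical, since the real definitions aren't shown in the question):

// Hypothetical reconstruction: axis indices for each rotation order,
// where i is applied first and k last (0 == X, 1 == Y, 2 == Z).
struct EulerData { int i, j, k; };

EulerData getEulerData( Sy_3dMath::RotationOrder order )
{
    switch ( order ) {
        case Sy_3dMath::XYZ: return { 0, 1, 2 };
        case Sy_3dMath::ZYX: return { 2, 1, 0 };
        // ... the remaining orders follow the same pattern
        default:             return { 0, 1, 2 };
    }
}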
Can anyone see a flaw in my code or understanding? Linear algebra doesn't exactly come naturally to me...

I won't mark this as the answer, as I don't understand why I'm seeing this behaviour when the whole point of extrinsic Euler rotations (as far as I can tell) is rotation about fixed world axes, but Blender has exactly the same behaviour as my own app.
I'd like to think that someone in the last 20 years would have spotted a massive rotation flaw in Blender, so the behaviour I'm seeing must be expected...
If someone can explain why, I'll happily mark their answer as accepted!
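One identity that may explain this (my note, not from the thread): for the same three angles, an extrinsic rotation about the fixed axes in the order i, j, k produces exactly the same matrix as an intrinsic rotation about the local axes in the reversed order k, j, i:

R_k(θ_k) · R_j(θ_j) · R_i(θ_i) = R_extrinsic(i → j → k) = R_intrinsic(k → j → i)

The two conventions are indistinguishable from the final matrix alone, so when the angles are edited one at a time, some of them will always appear to act about local axes, in any implementation, Blender included.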

Related

What is wrong with my matrix stack implementation (OpenGL ES 2.0)?

I am porting my OpenGL 1.1 application to OpenGL ES 2.0 and am writing a wrapper to implement the OpenGL 1.1 functions. My code seems to work fine until I start calling glPushMatrix() and glPopMatrix(). I think my understanding of how these should be implemented is incorrect.
Do I compute the final rotate/translate/scale before pushing it back on the stack? Should I keep only one modelview matrix (instead of separating it into three)? Are the transforms applied in the correct order?
Here is the code for my transformation matrices:
static std::vector<GLfloat> vertices;
static std::vector<std::vector<GLfloat>> rotationMatrixStack;
static std::vector<std::vector<GLfloat>> scalingMatrixStack;

static std::vector<GLfloat> rotationMatrix =
{
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f
};

static std::vector<GLfloat> scalingMatrix =
{
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f
};

static std::vector<GLfloat> translationMatrix =
{
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f
};

static std::vector<GLfloat> orthographicMatrix =
{
    .0025f, 0.0f,   0.0f, -1.0f,
    0.0f,   .0025f, 0.0f, -1.0f,
    0.0f,   0.0f,   1.0f,  0.0f,
    0.0f,   0.0f,   0.0f,  1.0f
};
void glTranslatef (GLfloat x, GLfloat y, GLfloat z)
{
    float translation[] =
    {
        1.0f, 0.0f, 0.0f, x,
        0.0f, 1.0f, 0.0f, y,
        0.0f, 0.0f, 1.0f, z,
        0.0f, 0.0f, 0.0f, 1.0f
    };
    multiplyMatrix(translation, &translationMatrix[0], &translationMatrix[0]);
}

void glScalef (GLfloat x, GLfloat y, GLfloat z)
{
    float scaling[] =
    {
        x,    0.0f, 0.0f, 0.0f,
        0.0f, y,    0.0f, 0.0f,
        0.0f, 0.0f, z,    0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    };
    multiplyMatrix(scaling, &scalingMatrix[0], &scalingMatrix[0]);
}

void glRotatef (GLfloat angle, GLfloat x, GLfloat y, GLfloat z)
{
    glTranslatef(-x, -y, -z);
    GLfloat radians = angle * M_PI/180;
    float zRotation[] =
    {
        cos(radians), -sin(radians), 0.0f, 0.0f,
        sin(radians),  cos(radians), 0.0f, 0.0f,
        0.0f,          0.0f,         1.0f, 0.0f,
        0.0f,          0.0f,         0.0f, 1.0f
    };
    multiplyMatrix(zRotation, &rotationMatrix[0], &rotationMatrix[0]);
    glTranslatef(x, y, z);
}
void glLoadIdentity (void)
{
    rotationMatrix, scalingMatrix, translationMatrix =
    {
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    };
}
void multiplyMatrix(float* a, float* b, float* product)
{
    int a_height = 4;
    int a_width = 4;
    int b_height = 4;
    int b_width = 4;

    int product_height = a_height;
    int product_width = b_width;

    float intermediateMatrix[product_height * product_width] = {0};
    for (int product_row = 0; product_row < product_height; product_row++)
    {
        for (int product_column = 0; product_column < product_width; product_column++)
        {
            float value = 0;
            //std::cout << "r[" << (product_row*product_width) + product_column << "] = ";
            for (int multiplication_index = 0; multiplication_index < a_width; multiplication_index++)
            {
                value += a[(product_row * a_width) + multiplication_index] * b[product_column + (b_height * multiplication_index)];
                //std::cout << "( a[" << (product_row * a_width) + multiplication_index << "] * b[" << product_column + (b_height * multiplication_index) << "] ) + ";
            }
            //std::cout << std::endl;
            intermediateMatrix[(product_row*product_width) + product_column] = value;
        }
    }
    for (int i = 0; i < product_height * product_width; i++)
    {
        product[i] = intermediateMatrix[i];
    }
}
Here is the code for the matrix stack
static std::vector<std::vector<GLfloat>> translationMatrixStack;

void glPushMatrix()
{
    rotationMatrixStack.push_back(rotationMatrix);
    scalingMatrixStack.push_back(scalingMatrix);
    translationMatrixStack.push_back(translationMatrix);
}

void glPopMatrix()
{
    rotationMatrix = rotationMatrixStack.back();
    scalingMatrix = scalingMatrixStack.back();
    translationMatrix = translationMatrixStack.back();
    rotationMatrixStack.pop_back();
    scalingMatrixStack.pop_back();
    translationMatrix.pop_back();
}
And here is the vertex shader code
attribute highp vec4 myVertex;
uniform mediump mat4 orthographicMatrix;
uniform mediump mat4 translationMatrix;
uniform mediump mat4 scalingMatrix;
uniform mediump mat4 rotationMatrix;

void main(void)
{
    gl_Position = orthographicMatrix * translationMatrix * scalingMatrix * rotationMatrix * myVertex;
}
You do not have a separate matrix stack for rotation, translation and scaling. In OpenGL there is one matrix stack for each matrix mode (See glMatrixMode). The matrix modes are GL_MODELVIEW, GL_PROJECTION, and GL_TEXTURE.
See the documentation of glTranslate:
glTranslate produces a translation by x y z . The current matrix (see glMatrixMode) is multiplied by this translation matrix, with the product replacing the current matrix.
the documentation of glRotate:
glRotate produces a rotation of angle degrees around the vector x y z . The current matrix (see glMatrixMode) is multiplied by a rotation matrix with the product replacing the current matrix.
and the documentation of glScale:
glScale produces a nonuniform scaling along the x, y, and z axes. The three parameters indicate the desired scale factor along each of the three axes.
The current matrix (see glMatrixMode) is multiplied by this scale matrix.
This means you need one matrix stack, and all operations operate on the same matrix stack.
Note, a matrix multiplication C = A * B works like this:
Matrix4x4 A, B, C;

// C = A * B
for ( int k = 0; k < 4; ++ k )
    for ( int j = 0; j < 4; ++ j )
        C[k][j] = A[0][j] * B[k][0] + A[1][j] * B[k][1] + A[2][j] * B[k][2] + A[3][j] * B[k][3];
A 4*4 matrix looks like this:
  c0  c1  c2  c3              c0  c1  c2  c3
[ Xx  Yx  Zx  Tx ]          [  0   4   8  12 ]
[ Xy  Yy  Zy  Ty ]          [  1   5   9  13 ]
[ Xz  Yz  Zz  Tz ]          [  2   6  10  14 ]
[  0   0   0   1 ]          [  3   7  11  15 ]
And the memory image of a 4*4 matrix looks like this:
[ Xx, Xy, Xz, 0, Yx, Yy, Yz, 0, Zx, Zy, Zz, 0, Tx, Ty, Tz, 1 ]
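As a quick illustration (my addition): in this column-major layout, element (row r, column c) of the flat array is m[c*4 + r], so the translation occupies the last four entries:

// Column-major flat storage: element (row r, column c) is m[c*4 + r].
float m[16] = { /* a matrix in the layout above */ };
float tx = m[3*4 + 0]; // Tx, index 12
float ty = m[3*4 + 1]; // Ty, index 13
float tz = m[3*4 + 2]; // Tz, index 14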
This means you have to adapt your matrix operations:
static std::vector<std::vector<GLfloat>> modelViewMatrixStack;
static std::vector<GLfloat> modelViewMatrix{
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f };

void multiplyMatrix( float A[], float B[], float P[] )
{
    float C[16];
    for ( int k = 0; k < 4; ++ k ) {
        for ( int j = 0; j < 4; ++ j ) {
            C[k*4+j] =
                A[0*4+j] * B[k*4+0] +
                A[1*4+j] * B[k*4+1] +
                A[2*4+j] * B[k*4+2] +
                A[3*4+j] * B[k*4+3];
        }
    }
    std::copy(C, C+16, P);
}

void glTranslatef( GLfloat x, GLfloat y, GLfloat z )
{
    float translation[]{
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        x,    y,    z,    1.0f };
    multiplyMatrix(&modelViewMatrix[0], translation, &modelViewMatrix[0]);
}

void glScalef( GLfloat x, GLfloat y, GLfloat z )
{
    float scaling[]{
        x,    0.0f, 0.0f, 0.0f,
        0.0f, y,    0.0f, 0.0f,
        0.0f, 0.0f, z,    0.0f,
        0.0f, 0.0f, 0.0f, 1.0f };
    multiplyMatrix(&modelViewMatrix[0], scaling, &modelViewMatrix[0]);
}

void glRotatef( GLfloat angle, GLfloat x, GLfloat y, GLfloat z )
{
    // note: assumes the (x, y, z) axis is already normalized
    float radians = angle * M_PI/180;
    float c = cos(radians);
    float s = sin(radians);
    float rotation[16]{
        x*x*(1.0f-c)+c,   x*y*(1.0f-c)-z*s, x*z*(1.0f-c)+y*s, 0.0f,
        y*x*(1.0f-c)+z*s, y*y*(1.0f-c)+c,   y*z*(1.0f-c)-x*s, 0.0f,
        z*x*(1.0f-c)-y*s, z*y*(1.0f-c)+x*s, z*z*(1.0f-c)+c,   0.0f,
        0.0f,             0.0f,             0.0f,             1.0f };
    multiplyMatrix(&modelViewMatrix[0], rotation, &modelViewMatrix[0]);
}
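The answer stops short of the stack functions, but with a single model-view matrix they reduce to something like this sketch (my addition, following the answer's pattern):

void glPushMatrix()
{
    modelViewMatrixStack.push_back(modelViewMatrix);
}

void glPopMatrix()
{
    modelViewMatrix = modelViewMatrixStack.back();
    modelViewMatrixStack.pop_back();
}

void glLoadIdentity(void)
{
    modelViewMatrix = {
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f };
}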
See further:
GLSL 4×4 Matrix Fields
GLSL Programming/Vector and Matrix Operations
Data Type (GLSL)

glm::lookAt returns matrix with nan elements

I want to create a view matrix for a camera which looks perpendicularly down at the ground:
glm::mat4 matrix = glm::lookAt(glm::vec3(0.0f, 1.0f, 0.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
The last argument is the global up vector, so everything seems to be correct, but I get the following matrix:
-nan -nan -0  0
-nan -nan  1  0
-nan -nan -0  0
 nan  nan -1  1
I guess that I get nan because the look-at vector is parallel to the up vector, but how can I build a correct view matrix using the glm::lookAt function?
The problem is with either your camera's position or the up vector.
Your camera is 1 unit up (0,1,0), looking down at the origin (0,0,0). The up vector indicates the up direction of the camera, not of the world space. For example, if you're looking forward, the up vector would be +Y. If you're looking down, with the top of your head facing +X, then the up vector is +X to you. It has to be something that's not parallel to the viewing direction (which here coincides with the camera's position vector, since the camera looks at the origin).
Solutions:
Changing the up vector to anything along the XZ plane
or to something that's not (0,0,0) when projected onto the XZ plane
Move your camera so that it's anywhere but along the Y axis
In lookAt it is impossible to have the viewing direction and the up-vector pointing in the same direction. If you want a camera that looks along the negative y-axis, you'll have to adjust the up-vector, for example to [0,0,1]. The direction specified in the up-vector controls how the camera is rotated around the view axis.
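Concretely, the call from the question works once the up vector is changed as described (a minimal sketch of that fix):

// Looking straight down from (0,1,0) at the origin: use +Z as up,
// because +Y would be parallel to the viewing direction.
glm::mat4 matrix = glm::lookAt(
    glm::vec3(0.0f, 1.0f, 0.0f),    // eye
    glm::vec3(0.0f, 0.0f, 0.0f),    // centre
    glm::vec3(0.0f, 0.0f, 1.0f));   // up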
I ran across this same problem of NaNs in the matrix returned by glm::lookAt() yesterday and have concocted what I think is a workaround. This seems to work for me for the particular problem of the UP vector being vec3(0.0f, 1.0f, 0.0f), which seems to be a common use case.
My Vulkan code looks like this:
struct UniformBufferObject {
    alignas(16) glm::mat4 model;
    alignas(16) glm::mat4 view;
    alignas(16) glm::mat4 proj;
};
...
UniformBufferObject ubo{};
...
glm::vec3 cameraPos = glm::vec3(0.0f, 2.0f, 0.0f);
ubo.view = glm::lookAt(cameraPos, glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));

// if the direction vector from the camera to the point being observed ends up being
// parallel to the UP vector, glm::lookAt() returns a mat4 with NaNs in it.
// to work around this, look for NaNs in ubo.view
int view_contains_nan = 0;
for (int col = 0; (col < 4) && !view_contains_nan; ++col) {
    for (int row = 0; (row < 4) && !view_contains_nan; ++row) {
        if (std::fpclassify(ubo.view[col][row]) == FP_NAN) {
            view_contains_nan = 1;
        }
    }
}

// if we ended up with NaNs, the workaround ubo.view that seems to work depends on
// the sign of the camera position Y
if (view_contains_nan) {
    std::cout << "view contains NaN" << std::endl;
    if (cameraPos.y >= 0.0f) {
        ubo.view = glm::mat4( -0.0f, -1.0f,  0.0f, 0.0f,
                               0.0f,  0.0f,  1.0f, 0.0f,
                              -1.0f,  0.0f, -0.0f, 0.0f,
                              -0.0f, -0.0f, -cameraPos.y, 1.0f);
    } else {
        ubo.view = glm::mat4(  0.0f,  1.0f,  0.0f, 0.0f,
                               0.0f,  0.0f, -1.0f, 0.0f,
                              -1.0f,  0.0f, -0.0f, 0.0f,
                              -0.0f, -0.0f,  cameraPos.y, 1.0f);
    }
}
Hopefully it works for you too, though I suppose it would be nice if glm::lookAt() could be fixed to not return matrices with NaNs in it.
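An alternative (my suggestion, not part of either answer) is to avoid the degenerate case up front by swapping in a non-parallel up vector before calling glm::lookAt:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cmath>

// If the view direction is (nearly) parallel to +Y, fall back to +Z as up,
// so lookAt never normalizes a zero-length cross product.
glm::mat4 safeLookAt(glm::vec3 eye, glm::vec3 centre)
{
    glm::vec3 dir = glm::normalize(centre - eye);
    glm::vec3 up(0.0f, 1.0f, 0.0f);
    if (std::abs(glm::dot(dir, up)) > 0.999f)
        up = glm::vec3(0.0f, 0.0f, 1.0f);
    return glm::lookAt(eye, centre, up);
}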

how to convert world coordinates to screen coordinates

I am creating a game that will have 2d pictures inside a 3d world.
I originally started off by not caring about my images being stretched to a square while I learnt more about how game mechanics work... but it's now time to get my textures to display in the correct ratio and... size.
Just a side note, I have played with orthographic left-handed projections, but I noticed that you cannot do 3d in that... (I guess that makes sense... but I could be wrong; I tried it, and when I rotated my image it went all stretchy and weird).
the nature of my game is as follows:
In the image it says -1.0 to 1.0... I'm not fussed if the coordinates are:
topleft = 0,0,0
bottom right = 1920, 1200, 0
But if that's the solution, then whatever... (P.S. the game is not currently set up so that -1.0 and 1.0 are the left and right of the screen; in fact I'm not sure how I'm going to make the screen edges the boundaries, but that's a question for another day.)
Question:
The issue I am having is that my image for my player (2d) is 128 x 64 pixels. After world matrix multiplication (I think that's what it is), the vertices I put in scale my texture hugely... which makes sense, but it looks ugly, and I don't want to just whack a massive scaling matrix into the mix, because it'll be difficult to work out how to make the texture 1:1 with my screen pixels (although maybe you will tell me that is actually how you do it, with a formula to work out what the scaling should be).
But basically, I want the vertices to hold a 1:1 pixel size of my image, unstretched...
So I assume I need to convert my world coords to screen coords before outputting my textures and vertices??? I'm not sure how it works...
Anyways, here are my vertices.. you may notice what I've done:
struct VERTEX
{
    float X, Y, Z;
    //float R, G, B, A;
    float NX, NY, NZ;
    float U, V; // texture coordinates
};

const unsigned short SquareVertices::indices[ 6 ] = {
    0, 1, 2, // side 1
    2, 1, 3
};

const VERTEX SquareVertices::vertices[ 4 ] = {
    //{ -1.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
    //{  1.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
    //{ -1.0f,  1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
    //{  1.0f,  1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
    { -64.0f, -32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
    {  64.0f, -32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
    { -64.0f,  32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
    {  64.0f,  64.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
};
(128 pixels / 2 = 64), (64 / 2 = 32), because the centre is 0,0... but what do I need to do to the projection, world transdoobifications and whatnot to get from world to screen?
My current setup looks like this:
// called 1st
void Game::SetUpViewTransformations( )
{
    XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -20.0f, 0 );
    XMVECTOR vecCamLookAt = XMVectorSet( 0, 0, 0, 0 );
    XMVECTOR vecCamUp = XMVectorSet( 0, 1, 0, 0 );
    matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
}

// called 2nd
void Game::SetUpMatProjection( )
{
    matProjection = XMMatrixPerspectiveFovLH(
        XMConvertToRadians( 45 ),   // the field of view
        windowWidth / windowHeight, // aspect ratio
        1,                          // the near view-plane
        100 );                      // the far view-plane
}
and here is a sneaky look at my update and render methods:
// called 3rd
void Game::Update( )
{
    world->Update();
    worldRotation = XMMatrixRotationY( world->rotation );

    player->Update( );
    XMMATRIX matTranslate = XMMatrixTranslation( player->x, player->y, 0.0f );
    //XMMATRIX matTranslate = XMMatrixTranslation( 0.0f, 0.0f, 1.0f );
    matWorld[ 0 ] = matTranslate;
}

// called 4th
void Game::Render( )
{
    // set our new render target object as the active render target
    d3dDeviceContext->OMSetRenderTargets( 1, rendertarget.GetAddressOf( ), zbuffer.Get( ) );

    // clear the back buffer to a deep blue
    float color[ 4 ] = { 0.0f, 0.2f, 0.4f, 1.0f };
    d3dDeviceContext->ClearRenderTargetView( rendertarget.Get( ), color );
    d3dDeviceContext->ClearDepthStencilView( zbuffer.Get( ), D3D11_CLEAR_DEPTH, 1.0f, 0 ); // clear the depth buffer

    CBUFFER cBuffer;
    cBuffer.DiffuseVector = XMVectorSet( 0.0f, 0.0f, 1.0f, 0.0f );
    cBuffer.DiffuseColor = XMVectorSet( 0.5f, 0.5f, 0.5f, 1.0f );
    cBuffer.AmbientColor = XMVectorSet( 0.2f, 0.2f, 0.2f, 1.0f );
    //cBuffer.Final = worldRotation * matWorld[ 0 ] * matView * matProjection;
    cBuffer.Final = worldRotation * matWorld[ 0 ] * matView * matProjection;
    cBuffer.Rotation = XMMatrixRotationY( world->rotation );

    // calculate the view transformation
    SetUpViewTransformations();
    SetUpMatProjection( );
    //matFinal[ 0 ] = matWorld[0] * matView * matProjection;

    UINT stride = sizeof( VERTEX );
    UINT offset = 0;

    d3dDeviceContext->PSSetShaderResources( 0, 1, player->texture.GetAddressOf( ) ); // Set up texture
    d3dDeviceContext->IASetVertexBuffers( 0, 1, player->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer
    d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices will be drawn
    d3dDeviceContext->IASetIndexBuffer( player->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
    d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &cBuffer, 0, 0 ); // set the new values for the constant buffer
    d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DON'T FORGET: IF YOU DISABLE THIS AND YOU WANT COLOUR, MULTIPLY BY Color.a!!!

    d3dDeviceContext->DrawIndexed( ARRAYSIZE( player->indices ), 0, 0 ); // draw
    swapchain->Present( 1, 0 );
}
Just to clarify: if I make my vertices use 2 and 1 (in line with my image being 128 x 64), I get a normal-looking size image... and yet at 0,0,0 it's not at a 1:1 size:
    { -2.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
    {  2.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
    { -2.0f,  1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
    {  2.0f,  2.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
Desired outcome 2 teh max:
Cool picture isn't it :D ?
Comment help:
I'm not familiar with DirectX, but as far as I can see from your image, screen coordinates run from -1.0 to +1.0 on x and y. So the total length on both axes equals 2, and your image is scaled by a factor of 2. Try accounting for this scale in the camera matrix.
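Building on that comment: one common way to get the 1:1 pixel mapping asked about (a sketch, not from the thread; it reuses the windowWidth/windowHeight values from the setup code above) is to replace the perspective projection with an orthographic one sized in pixels:

// One world unit now equals one pixel, so the 128 x 64 quad spanning
// (-64..64, -32..32) renders at exactly 128 x 64 screen pixels.
matProjection = XMMatrixOrthographicLH(
    (float)windowWidth,   // view volume width, in pixels
    (float)windowHeight,  // view volume height, in pixels
    0.1f,                 // near plane
    100.0f );             // far plane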

DirectX 11 Vertices Coordinates

I'm working on my first C++ and DirectX 11 project, where my current goal is to draw a colored triangle on the screen. This has worked well without any problems. However, there's one part that I would like to change, but I don't know how. I've tried searching for a solution, but I haven't found one yet, and I guess the reason is that I don't really know what I should search for.
Currently I set up my triangle's 3 vertices like this:
VertexPos vertices[] =
{
    { XMFLOAT3(  0.5f,  0.5f, 1.0f ) },
    { XMFLOAT3(  0.5f, -0.5f, 1.0f ) },
    { XMFLOAT3( -0.5f, -0.5f, 1.0f ) },
};
Where VertexPos is defined like this:
struct VertexPos
{
    XMFLOAT3 pos;
};
Currently, my vertex positions are set up in the range -1.0f to 1.0f, where 0.0f is the center. How can I change this so that I can position my vertices using "real" coordinates like this:
VertexPos vertices[] =
{
    { XMFLOAT3( 100.0f, 300.0f, 1.0f ) },
    { XMFLOAT3( 200.0f, 200.0f, 1.0f ) },
    { XMFLOAT3( 200.0f, 300.0f, 1.0f ) },
};
Usual way (a sketch follows this list):
create an orthographic projection matrix (XMMatrixOrthographicLH() if you are using XMMATH)
create a constant buffer with this matrix on the CPU side (C++ code) and on the GPU side (vertex shader)
multiply vertex positions by the orthographic projection matrix in the vertex shader
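As a sketch of the first step (my illustration; screenWidth and screenHeight are assumed variables), an off-center orthographic matrix puts the origin at the top-left with y growing downwards, matching typical 2D pixel coordinates:

// Pixel-space projection: (0,0) is the top-left corner,
// (screenWidth, screenHeight) the bottom-right.
XMMATRIX proj = XMMatrixOrthographicOffCenterLH(
    0.0f, (float)screenWidth,    // left, right
    (float)screenHeight, 0.0f,   // bottom, top (swapped so +y points down)
    0.0f, 1.0f );                // near, far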
Simpler way (from F.Luna book):
XMFLOAT3 SpriteBatch::PointToNdc(int x, int y, float z)
{
    XMFLOAT3 p;
    p.x = 2.0f*(float)x/mScreenWidth - 1.0f;
    p.y = 1.0f - 2.0f*(float)y/mScreenHeight;
    p.z = z;
    return p;
}
Almost the same, but on CPU side. Of course, you can move this code to shader too.
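For instance (a hypothetical check, assuming an 800 x 600 backbuffer), the vertex (100, 300) from the question would map to:

// x = 2*100/800 - 1 = -0.75, y = 1 - 2*300/600 = 0.0
XMFLOAT3 p = spriteBatch.PointToNdc(100, 300, 1.0f); // p = (-0.75f, 0.0f, 1.0f)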
P.S. Your book/manual/tutorials will probably teach you about this a little later, so you'd better trust them and follow along step by step.
Happy coding! =)

DirectX: Small distortion between 2 sprite polygons

Hello. I have been rendering sprites with DirectX the same way for a long time, but here I am rendering the screen into a texture and then rendering that texture with one big sprite on the screen.
For the camera I use this:
vUpVec=D3DXVECTOR3(0,1,0);
vLookatPt=D3DXVECTOR3(0,0,0);
vFromPt=D3DXVECTOR3(0,0,-1);
D3DXMatrixLookAtRH( &matView, &vFromPt, &vLookatPt, &vUpVec );
g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );
D3DXMatrixOrthoRH( &matProj, 1,1, 0.5f, 20 );
g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
And to render the sprite:
CUSTOMVERTEX* v;
spritevb->Lock( 0, 0, (void**)&v, 0 );
v[0].position = D3DXVECTOR3(-0.5f,-0.5f,0); v[0].u=0; v[0].v=1;
v[1].position = D3DXVECTOR3(-0.5f,0.5f,0); v[1].u=0; v[1].v=0;
v[2].position = D3DXVECTOR3(0.5f,-0.5f,0); v[2].u=1; v[2].v=1;
v[3].position = D3DXVECTOR3(0.5f,0.5f,0); v[3].u=1; v[3].v=0;
spritevb->Unlock();
g_pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2 );
This is very basic and works; my sprite is rendered covering the whole screen.
But looking closer, I can see a small diagonal line across the screen (between the 2 polygons): not a colored one, but as if the two polygons weren't perfectly positioned.
I thought about filtering and tried removing everything, but maybe I'm forgetting something...
Thanks
To render full screen, the best way is to not define any camera transform at all.
If you use as input positions
SimpleVertex vertices[] =
{
    { XMFLOAT3( -1.0f,  1.0f, 0.5f ), XMFLOAT2( 0.0f, 0.0f ) },
    { XMFLOAT3(  1.0f,  1.0f, 0.5f ), XMFLOAT2( 1.0f, 0.0f ) },
    { XMFLOAT3(  1.0f, -1.0f, 0.5f ), XMFLOAT2( 1.0f, 1.0f ) },
    { XMFLOAT3( -1.0f, -1.0f, 0.5f ), XMFLOAT2( 0.0f, 1.0f ) },
};
and in the Vertex Shader do
VS_OUTPUT RenderSceneVS( VS_INPUT input )
{
    VS_OUTPUT Output;
    Output.Position = input.Position;
    Output.TextureUV = input.TextureUV;
    return Output;
}
you get a render to full screen as well, without having to worry about the viewing frustum. Using this, I never saw any lines between the two triangles.