DirectX, scaling orthographic vertices to world space vertices - c++

Introduction
Let's say I have the following vertices:
const VERTEX World::vertices[ 4 ] = {
{ -960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1 screen coordinates centred
{ 960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
};
You may have guessed that 960 * 2 is 1920, which is the width of my screen, and the same goes for 600 * 2 being 1200, which is its height.
These vertices represent a rectangle that will fill up my ENTIRE screen, with the origin at the centre of the screen.
Issue
So up until now, I have been using an Orthographic view without a projection:
matView = XMMatrixOrthographicLH( Window::width, Window::height, -1.0, 1.0 );
Any matrix that was being sent to the screen was multiplied by matView, and it seemed to work great. More specifically, my image (using the above vertices array) fit snugly in my screen and mapped 1:1 to its original pixels.
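(For the record, my understanding of why that was pixel-perfect, just my own reasoning rather than code from the project:)
// XMMatrixOrthographicLH( w, h, zn, zf ) maps view-space x in [-w/2, +w/2] and
// y in [-h/2, +h/2] straight to NDC [-1, +1], so one world unit equals one pixel here:
float ndcX = 960.0f / ( 1920.0f / 2.0f ); // = +1.0f, the right-hand edge of the screen
float ndcY = 600.0f / ( 1200.0f / 2.0f ); // = +1.0f, the top edge of the screen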
Unfortunately, I need 3D now... and I just realised I'm going to need some projection... so I prepared this little puppy:
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -1.0f, 0 ); // I did try setting z to -100.0f but that didn't work well as I can't scale it back any more... and it's not accurate to the 1:1 pixel I want
XMVECTOR vecCamLookAt = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
XMVECTOR vecCamUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
matProjection = XMMatrixPerspectiveFovLH(
XMConvertToRadians( 45 ), // the field of view
Window::width / Window::height, // aspect ratio
1.0f, // the near view-plane
100.0f ); // the far view-plane
You may already know what the problem is... but if not: I have just set my field of view to 45 degrees. This will make a nice perspective and do 3D stuff great, but my vertices array is no longer going to cut the mustard, because with that FOV and aspect ratio the camera only sees a small region of world space, so my vertices are far too huge for the view I am now looking at (see image).
I was thinking that I need to apply some scaling to the output matrix, to shrink those huge coordinates back down to the size my FOV and screen aspect are now asking for.
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer and further into the frustum, etc.?
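For what it's worth, here is the relationship I think is at the heart of it (just a sketch of my reasoning, using the 45-degree FOV from above, not code from the project):
// At distance d, a perspective camera with vertical FOV fovY sees a slab of the z = 0
// plane that is 2 * d * tan( fovY / 2 ) world units tall. For that to be exactly
// Window::height units (so 1 unit == 1 pixel on that plane), the camera has to sit at:
float fovY = XMConvertToRadians( 45.0f );
float camDistance = ( (float)Window::height / 2.0f ) / tanf( fovY / 2.0f ); // roughly 1448 for a 1200 px tall screen
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -camDistance, 0.0f );
// (the far plane of the projection would then need to be pushed out beyond camDistance)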
Goal
I'm trying to avoid changing every single object's vertices array to a rough scaled prediction of what the original image would look like in world space.
Some extra info
I just wanted to clarify what kind of matrix operations I am currently doing to the world and then how I render using those changes... so this is me doing some translations on my big old background image:
// flipY just turns the textures back around
// worldTranslation is actually the position for the background, so always 0,0,0... as you can see 0.5 was there to make sure I ordered things that were drawn correctly when using orthographic
XMMATRIX worldTranslation = XMMatrixTranslation( 0.0f, 0.0f, 0.5f );
world->constantBuffer.Final = flipY * worldTranslation * matView;
// My thoughts are on this that somehow I need to add a scaling to this matrix...
// but if I add scaling here... it's going to mean every other game object
// (players, enemies, pickups) are going to need to be scaled... and really I just want to
// have to scale the whole lot the once.. right?
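Something like this is what I have in mind, if it even makes sense (a rough sketch only; the factor s is made up, not something from the project):
// Build one global scale once and fold it into the shared view matrix, so every object
// gets shrunk the same way without touching its own vertex array. With the camera at
// z = -1 and a 45 degree FOV, the visible height at z = 0 is 2 * tan( 22.5 deg ) ~= 0.83
// world units, so s ~= 0.83 / 1200 would map my 1200 px back onto that.
float s = 2.0f * tanf( XMConvertToRadians( 22.5f ) ) / 1200.0f;
XMMATRIX matViewScaled = XMMatrixScaling( s, s, s ) * matView; // then use matViewScaled wherever matView was used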
And finally, this is where it is drawn to the screen:
// Background
d3dDeviceContext->PSSetShaderResources( 0, 1, world->textures[ 0 ].GetAddressOf( ) ); // Set up texture of background
d3dDeviceContext->IASetVertexBuffers( 0, 1, world->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer (do I need the scaling here?)
d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices will be drawn
d3dDeviceContext->IASetIndexBuffer( world->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &world->constantBuffer, 0, 0 ); // set the new values for the constant buffer
d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DONT FORGET IF YOU DISABLE THIS AND YOU WANT COLOUR, * BY Color.a!!!
d3dDeviceContext->DrawIndexed( ARRAYSIZE( world->indices ), 0, 0 ); // draw
And then this is what I have done to apply my matProjection, which has supersized all my vertices:
world->constantBuffer.Final = flipY * worldTranslation * matView * matProjection; // My new lovely projection and view that make everything hugeeeee... I literally see like 1 pixel of the background brah!
Please feel free to take a copy of my game and run it as is (Windows 8 application Visual studio 2013 express project) in the hopes that you can help me out with putting this all into 3D: https://github.com/jimmyt1988/FlyGame/tree/LeanerFramework/Game

It's me again. Let me try to clear a few things up.
1
Here is a little screenshot from an editor of mine:
I have edited in little black boxes to illustrate something. The axis circles you see around the objects are rendered at exactly the same size. However, they are rendered through a perspective projection. As you can see, the one on the far left is something like twice as large as the one in the center. This is due purely to the nature of a projection like that. If this is unacceptable, you must use a non-perspective projection.
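To put a rough number on it (a back-of-the-envelope sketch, not code from either project): the on-screen size of an object falls off linearly with its distance from the camera.
#include <cmath>

// Approximate on-screen height, in pixels, of an object of worldHeight units sitting
// at the given view-space distance, for a symmetric vertical FOV. Doubling the
// distance halves the result, which is the effect visible in the screenshot.
float projectedHeightPx( float worldHeight, float distance, float fovY, float screenHeightPx )
{
    return worldHeight * screenHeightPx / ( 2.0f * distance * std::tan( fovY * 0.5f ) );
}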
2
The only way it is possible to maintain a 1:1 ratio of screen space to UV space is to have the object rendered at 1 pixel on screen per 1 pixel on the texture. There is nothing more to it than that. However, what you can do is change your texture filter options. Filter options are designed specifically for rendering non-1:1 ratios. For example, the code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
Tells OpenGL: "If you are told to sample a pixel, don't interpolate anything. Just take the nearest value on the texture and paste it on the screen."
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
This code, however, does something much better: it interpolates between pixels. I think this may be what you want, if you aren't already doing it.
These pictures (taken from http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/) show this:
Nearest:
Linear:
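Since your project is Direct3D 11 rather than OpenGL, the equivalent knob there is the sampler state. Roughly like this (a sketch; the d3dDevice and d3dDeviceContext names are assumed, not taken from your code):
// Point ("nearest") versus linear filtering in D3D11: pick the Filter value accordingly.
D3D11_SAMPLER_DESC samplerDesc = {};
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;   // or D3D11_FILTER_MIN_MAG_MIP_POINT
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

Microsoft::WRL::ComPtr<ID3D11SamplerState> samplerState;
d3dDevice->CreateSamplerState( &samplerDesc, samplerState.GetAddressOf() );
d3dDeviceContext->PSSetSamplers( 0, 1, samplerState.GetAddressOf() );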
Here is what it comes down to. You request that:
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer and further into the frustum, etc.?
What you are requesting is by definition not possible, so you have to look for alternative solutions. I hope that helps somewhat.

Related

shapes skewed when rotated, using openGL, glm math, orthographic projection

For practice I am setting up a 2d/orthographic rendering pipeline in openGL to be used for a simple game, but I am having issues related to the coordinate system.
In short, rotations distort 2d shapes, and I cannot seem to figure why. I am also not entirely sure that my coordinate system is sound.
First I looked for previous answers; the most relevant one (2D opengl rotation causes sprite distortion) indicates that the problem was an incorrect ordering of transformations, but for now I am using just a view matrix and projection matrix, multiplied in the correct order in the vertex shader:
gl_Position = projection * view * model * vec4(a_position, 1.0); // (The model is just the identity matrix.)
To summarize my setup so far:
- I am successfully uploading a quad that should stretch across the whole screen:
GLfloat vertices[] = {
-wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top left
-wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom left
wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom right
wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top right
};
GLuint indices[] = {
0, 1, 2, // first Triangle
2, 3, 0, // second Triangle
};
wf and hf are 1, and I am trying to use a -1 to 1 coordinate system so I don't need to scale by the resolution in shaders (though I am not sure that this is correct to do.)
My viewport and orthographic matrix:
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
...
glm::mat4 mat_ident(1.0f);
glm::mat4 mat_projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
... though this clearly does not factor in the screen width and height. I have seen others use width and height instead of 1s, but this seems to break the system or display nothing.
I rotate with a static method that modifies a struct containing a glm::quaternion (the time is divided by 1000 to get seconds):
main_cam.rotate((GLfloat)curr_time / TIME_UNIT_TO_SECONDS, 0.0f, 0.0f, 1.0f);
// which does: glm::angleAxis(angle, glm::vec3(x, y, z) * orientation)
Lastly, I pass the matrix as a uniform:
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_ident));
...and multiply in the vertex shader
gl_Position = u_matrix * vec4(a_position, 1.0);
v_position = a_position.xyz;
The full-screen quad rotates on its center (0, 0 as I wanted), but its length and width distort, which means that I didn't set something correctly.
My best guess is that I haven't created the right ortho matrix, but admittedly I have had trouble finding anything else on stack overflow or elsewhere that might help debug. Most answers suggest that the matrix multiplication order is wrong, but that is not the case here.
A secondary question is--should I not set my coordinates to 1/-1 in the context of a 2d game? I did so in order to make writing shaders easier. I am also concerned about character/object movement once I add model matrices.
What might be causing the issue? If I need to multiply the arguments to gl::ortho by width and height, then how do I transform coordinates so v_position (my "in"/"varying" interpolated version of the position attribute) works in -1 to 1 as it should in a shader? What are the implications of choosing a particular coordinates system when it comes to ease of placing entities? The game will use sprites and textures, so I was considering a pixel coordinate system, but that quickly became very challenging to reason about on the shader side. I would much rather have THIS working.
Thank you for your help.
EDIT: Is it possible that my varying/interpolated v_position should be set to the calculated gl_Position value instead of the attribute position?
Try accounting for the aspect ratio of the window you are displaying on by passing it into the first two parameters of glm::ortho:
GLfloat aspectRatio = (GLfloat)SCREEN_WIDTH / (GLfloat)SCREEN_HEIGHT; // cast so the division isn't done in integers
glm::mat4 mat_projection = glm::ortho(-aspectRatio, aspectRatio, -1.0f, 1.0f, -1.0f, 1.0f);
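With that projection, one unit covers the same number of pixels in x and y, so rotations no longer distort the quad. It will no longer reach the left and right edges of the screen, though; if you still want a full-screen quad, put that stretch into the model matrix instead of the projection. A sketch, reusing the names from your question:
glm::mat4 model = glm::scale(glm::mat4(1.0f), glm::vec3(aspectRatio, 1.0f, 1.0f)); // stretch the unit quad to screen shape
// If you also rotate, apply the rotation after the stretch (mvp = P * V * R * S) so the
// screen-shaped rectangle rotates rigidly instead of being sheared.
glm::mat4 mvp = mat_projection * FreeCamera_calc_view_matrix(&main_cam) * model;
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mvp));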

Same VBO for all sprites in a 2D game?

I'm writing a small 2D game-engine (educative purpose) in C++ and OpenGL 3.3, while writing the code I noted that almost all sprites (if not all) use the same vertexBuffer values:
const float vertexBuffer[] =
{
-1.0f, -1.0f, 0.0f, 1.0f,
-1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 0.0f, 1.0f
};
That is 2 triangles (if using VBO indexing) in model space that form a square; the indexBuffer goes like:
const unsigned short indexBuffer[] = { 0, 1, 2, 2, 0, 3 };
Why am I using the same model-space values for all my sprites? Well, I use a different MVP matrix for each of them:
P (projection): The orthographic camera transform, usually with the same width and height as the glContext.
V (view): A lookAt transformation; it just sits on the z axis looking at the xy plane perpendicularly. This is also used to move the camera (follow the player, etc.).
M (model): this matrix is created using transformations belonging to each sprite:
glm::mat4 model = <translate> * <rotate> * <scale>
Where:
<translate> is the position of the sprite in screen-space
<rotate> the rotation of the sprite
<scale> The size of the sprite in pixels divided by 2. Why? Each corner of the model-space square corresponds to a vertex, and the square formed by these has its center at the origin, so if our sprite is 250x250 pixels, we scale by 125 px to each side in each axis, thus transforming our model-space square into a screen-space square.
So, if I have 5 sprites I'll call glDrawElements 5 times, with different MVPs and textures each time, but the same vertexBuffer, indexBuffer and uvCoordinates.
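In code, building that per-sprite model matrix looks roughly like this (just a sketch; sprite.pos, sprite.angle and sprite.size are made-up fields, not from my engine):
// M = translate * rotate * scale, with the scale being half the sprite size in pixels.
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(sprite.pos, 0.0f));            // sprite.pos: glm::vec2, screen-space position
model = glm::rotate(model, sprite.angle, glm::vec3(0.0f, 0.0f, 1.0f)); // sprite.angle: radians, around z
model = glm::scale(model, glm::vec3(sprite.size * 0.5f, 1.0f));        // sprite.size: glm::vec2, size in pixels
glm::mat4 mvp = projection * view * model;   // uploaded as a uniform before each glDrawElements call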
Do you think this is an error-prone approach to keep using in the future? Or should I instead apply the <translate> and <scale> transformations directly to the vertices when creating them, and leave the model matrix with only the rotation?

How do I specify texture coordinates for a GL_QUAD_STRIP?

I am trying to understand how to specify texture coordinates for a GL_QUAD_STRIP.
I have managed to get things working for one quad:
float vertices[] = { 0.0f, 0.0f, 1.0f, +1.0f, 0.0f, 0.0f, // bottom line
0.0f, 1.0f, 1.0f, +1.0f, 1.0f, 0.0f}; // top line
unsigned int indices[] = {2, 0, // x = 0
3, 1}; // x = +1
float textureCoordinates[] = { 1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f};
...
glBindBuffer(GL_ARRAY_BUFFER, 0); // unbinds any buffer object previously bound
glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibufferid);
glDrawElements(GL_QUAD_STRIP, 4, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
And here is how the result looks (white rectangle with image, rest is drawn on to help explain):
However I do not understand the logic behind the choice of textureCoordinates[] :-(.
The first texture coordinate is (1,0); I would assume that this corresponds to lower right corner?
Also I would assume that when OpenGL reads the first index: 2, it uses this to look up the vertex: (0,1,1): upper left corner. Next it reads the first texture coordinate: (1,0).
But as mentioned above I would assume this to be the lower right corner of the texture !?
However the texture is shown unrotated so this can not be the case!?
Just like the vertices, the texture coordinates are also selected based on the indices used by glDrawElements(). So the first texture coordinate is not (1,0), but (1,1) because the first index is 2. Vertices and coordinates would be according to the following table, where i = index, v = vertex and t = texture coordinate. (I'll only take the x and y coordinates into consideration for the vertices, as the z coordinate doesn't really matter in this case.)
i   v       t
2   (0,1)   (1,1)
0   (0,0)   (1,0)
3   (1,1)   (0,1)
1   (1,0)   (0,0)
If we draw this on a piece of paper, we can see that this means the coordinates make more sense, since the indices matter. (I recommend that you do this! I had to do that to understand what was going on.) Notice in the table how the y coordinates match perfectly between the vertex and texture coordinate for a given index. But the x coordinates don't match: when the vertex has x = 0, the texture coordinate has x = 1 and vice versa. I assume this would make the image appear mirrored around the y axis instead of rotated in any way. What does the original image look like? Is it mirrored compared to what we see in the image you posted so that the building is on the left? If so, the texture coordinates would be the explanation. In that case, texture coordinate 2 and 3 should switch places.
In case you are curious, you could take a look at the OpenGL 2.1 specification on page 18, Figure 2.5(a), to see why the vertex indices were selected as they were. It would create a quad with vertices specified in a counterclockwise direction when projected on the screen. This is good because the initial value for glFrontFace() is GL_CCW, which means we see the front face of the polygons in the rendered image and the polygons would not have been culled if culling was enabled (see glCullFace()). (Culling is not enabled by default though, so it may or may not have mattered in your case.)
I hope this helped. Do comment if something is unclear!

Putting a texture on one surface of a cube isn't working

I'm trying to put a texture on one surface of a cube (if facing the XY plane the texture would be facing you).
No texture is getting drawn, only the wireframe, and I'm wondering what I'm doing wrong. I think it's the vertex coordinates?
Here's some code:
struct paperVertex {
D3DXVECTOR3 pos;
DWORD color; // The vertex color
D3DXVECTOR2 texCoor;
paperVertex(D3DXVECTOR3 p, DWORD c, D3DXVECTOR2 t) {pos = p; color = c; texCoor = t;}
paperVertex() {pos = D3DXVECTOR3(0,0,0); color = 0; texCoor = D3DXVECTOR2(0,0);}
};
D3DCOLOR color1 = D3DCOLOR_XRGB(255, 255, 255);
D3DCOLOR color2 = D3DCOLOR_XRGB(200, 200, 200);
vertices[0] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,0)); // bottom left corner of tex
vertices[1] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,0)); // top left corner of tex
vertices[2] = paperVertex(D3DXVECTOR3( 1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,1)); // top right corner of tex
vertices[3] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,1)); // bottom right corner of tex
vertices[4] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
vertices[5] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[6] = paperVertex(D3DXVECTOR3(1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[7] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
D3DXCreateTextureFromFile( md3dDev, "texture.bmp", &gTexture);
md3dDev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
md3dDev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
md3dDev->SetTexture(0, gTexture);
md3dDev->SetStreamSource(0, mVtxBuf, 0, sizeof(paperVertex));
md3dDev->SetVertexDeclaration(paperDecl);
md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME);
md3dDev->SetIndices(mIndBuf);
md3dDev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, VTX_NUM, 0, NUM_TRIANGLES);
disclaimer: I have no Direct3D experience, but solid OpenGL and general computer graphics experience. And since the underlying concepts don't really differ, I attempt an answer, of whose correctness I'm 99% sure.
You call md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME) immediately before rendering and wonder why only the wireframe is drawn?
Keep in mind that using a texture doesn't magically turn a wireframe model into a solid model. It is still a wireframe model with the texture only applied to the wireframe. You can only draw the whole primitive as wireframe or not.
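So if what you actually want is the solid, textured cube, switch the fill mode back before drawing (solid is the default anyway):
md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID); // instead of D3DFILL_WIREFRAME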
Likewise, using texture coordinates of (0,0) does not magically disable texturing for individual faces. You can only draw the whole primitive textured or not, though you might play with the texture coordinates and the texture's wrapping mode (and maybe the texture border) to make the "non-textured" faces use a uniform color from the texture and thus look untextured.
But in general to achieve such deviating render styles (like textured/non-textured, but especially wireframe/solid) in a single primitive, you won't get around splitting the primitive into multiple ones and drawing each one with its dedicated render style.
EDIT: According to your comment: If you don't need wireframe, why enable it then? Besides disabling wireframe, with your current texture coordinates the other faces won't just have a single color from the texture but some strange distorted version of the texture. This is because your vertices (and their texture coordinates) are shared between different faces, but the texture coordinates at the moment are created only for the front face to look reasonable.
In such a situation, you won't get around duplicating vertices, so that each face uses a set of 4 unique vertices. In the case of a cube you won't actually need an index array anymore, because each face needs its own vertices. This is due to the fact that a vertex conceptually represents all of the vertex's attributes (position, color, texCoord, ...), and you cannot have two vertices sharing a position but having different texture coordinates (well, you can, but then they are two distinct vertices). Once you've duplicated the vertices accordingly, you can give each of the corner vertices its respective texture coordinates (which would be the usual [0,1] quad if you want them textured normally, or all 0s if you want them to have a single color, in this case the color of the bottom-left (or top-left in D3D?) corner of the texture).
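For illustration, the duplicated front face could look something like the following, reusing the paperVertex struct from your question (just a sketch; the other five faces would get their own four vertices in the same way):
// Front face (z = -1) with its own four vertices and a full [0,1] texture quad.
// In Direct3D, (0,0) is the top-left of the texture and v grows downwards.
paperVertex front[4] = {
paperVertex(D3DXVECTOR3(-1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0.0f, 0.0f)), // top left
paperVertex(D3DXVECTOR3( 1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(1.0f, 0.0f)), // top right
paperVertex(D3DXVECTOR3( 1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1.0f, 1.0f)), // bottom right
paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(0.0f, 1.0f)) // bottom left
};
// Drawn as the two triangles (0,1,2) and (0,2,3), either via a small index list or by
// listing six vertices directly; flip the winding if the face gets culled.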
The same problem arises if you want to light the cube and need per-face normals instead of interpolated per-vertex normals. In this case you also have to introduce duplicate vertices deviating only in their normal attribute. Always keep in mind that a vertex conceptually consists of all the vertex attributes, and if two vertices have the same position but a different color/normal/texCoord/... they are conceptually (and practically) different vertices.

3D object in front of 2D Sprite (background) , how?

I'm new to Direct3D and I am working on a project that takes pictures from a webcam and draws some 3D objects in front of them.
I was able to render the webcam images as a background using an orthographic projection.
//init matrix
D3DXMatrixOrthoLH(&Ortho, frameWidth, frameHeight, 0.0f, 100.0f);
//some code
D3DXVECTOR3 position = D3DXVECTOR3(0.0f, 0.0f, 100.0f);
g_pSprite->Begin(D3DXSPRITE_OBJECTSPACE);
g_pSprite->Draw(g_pTexture,NULL,&center,&position,0xFFFFFFFF);
g_pSprite->End();
Then I tried to insert a simple triangle in front of it. The matrices are set up as follows:
D3DXMATRIXA16 matWorld;
D3DXMatrixTranslation( &matWorld, 0.0f,0.0f,5.0f );
g_pd3dDevice->SetTransform( D3DTS_WORLD, &matWorld );
D3DXMATRIXA16 matProj;
D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f );
g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
5.0 should be < 100.0, so the triangle is supposed to appear in front of the image. However, it does not appear unless I set the z position to 0. At position 0, I can see the triangle but the background is blank.
Do you guys have any suggestions?
I would not draw the webcam image in object space (D3DXSPRITE_OBJECTSPACE) if you intend to use your image solely for background purposes; something like
D3DXVECTOR3 backPos (0.f, 0.f, 0.f);
pBackgroundSprite->Begin(D3DXSPRITE_ALPHABLEND);
pBackgroundSprite->Draw (pBackgroundTexture,
0,
0,
&backPos,
0xFFFFFFFF);
pBackgroundSprite->End();
should hopefully do what you're looking for.
As a quick fix you could disable depth testing as follows:
g_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE);
This way, the visibility of the primitives being drawn simply reflects the order in which they are drawn.
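If you go that route, the usual pattern is to draw the background first with the depth test off and then switch it back on for the 3D geometry, roughly (a sketch reusing the names from above):
// Background first with no depth test, so it can never hide the 3D objects...
g_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE);
g_pSprite->Begin(D3DXSPRITE_ALPHABLEND);
g_pSprite->Draw(g_pTexture, NULL, &center, &backPos, 0xFFFFFFFF);
g_pSprite->End();
// ...then depth testing back on for the triangle and anything else in 3D.
g_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);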
Also, try using the PIX debugging tool (this is bundled with the DirectX SDK). This is always my first port of call for drawing discrepancies as it allows you to debug each Draw call separately with access to the depth buffer and transformed vertices.