I'm new to Direct3D and I'm working on a project that takes pictures from a webcam and draws some 3D objects in front of them.
I was able to render the webcam images as a background using an orthographic projection.
//init matrix
D3DXMatrixOrthoLH(&Ortho, frameWidth, frameHeight, 0.0f, 100.0f);
//some code
D3DXVECTOR3 position = D3DXVECTOR3(0.0f, 0.0f, 100.0f);
g_pSprite->Begin(D3DXSPRITE_OBJECTSPACE);
g_pSprite->Draw(g_pTexture, NULL, &center, &position, 0xFFFFFFFF);
g_pSprite->End();
Then I tried to insert a simple triangle in front of it. The matrices are set up as follows:
D3DXMATRIXA16 matWorld;
D3DXMatrixTranslation( &matWorld, 0.0f,0.0f,5.0f );
g_pd3dDevice->SetTransform( D3DTS_WORLD, &matWorld );
D3DXMATRIXA16 matProj;
D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f );
g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
5.0 is less than 100.0, so the triangle is supposed to appear in front of the image. However, it does not appear unless I set its z position to 0. At position 0, I can see the triangle, but the background is blank.
Do you guys have any suggestions?
I would not draw the webcam image in object space (D3DXSPRITE_OBJECTSPACE) if you intend to use the image solely as a background; something like
D3DXVECTOR3 backPos (0.f, 0.f, 0.f);
pBackgroundSprite->Begin(D3DXSPRITE_ALPHABLEND);
pBackgroundSprite->Draw (pBackgroundTexture,
0,
0,
&backPos,
0xFFFFFFFF);
pBackgroundSprite->End();
should hopefully do what you're looking for.
As a quick fix, you could disable depth testing as follows:
g_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE);
This way, the depth ordering of the primitives reflects the order in which they are drawn.
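Putting the two together, a minimal sketch of that draw order (reusing the names from the question; illustrative rather than drop-in):
// Background first, with depth testing off so it cannot occlude anything:
g_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE);
g_pSprite->Begin(D3DXSPRITE_ALPHABLEND);
g_pSprite->Draw(g_pTexture, NULL, NULL, NULL, 0xFFFFFFFF); // NULL centre/position draws at the top-left in screen space
g_pSprite->End();
// Re-enable depth testing before the 3D geometry goes on top:
g_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
// ... draw the triangle here ...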
Also, try using the PIX debugging tool (this is bundled with the DirectX SDK). This is always my first port of call for drawing discrepancies as it allows you to debug each Draw call separately with access to the depth buffer and transformed vertices.
Introduction
Let's say I have the following vertices:
const VERTEX World::vertices[ 4 ] = {
{ -960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1 screen coordinates centred
{ 960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
};
You may have guessed that 960 * 2 is 1920... which is the width of my screen, and the same goes for 600 * 2 being 1200 (the height).
These vertices represent a rectangle that will fill up my ENTIRE screen where the origin is in the centre of my screen.
Issue
So up until now, I have been using an orthographic view without a perspective projection:
matView = XMMatrixOrthographicLH( Window::width, Window::height, -1.0, 1.0 );
Any matrix that was being sent to the screen was multiplied by matView, and it seemed to work great. More specifically, my image, using the above vertex array, fit snugly in my screen and was 1:1 pixels to its original form.
Unfortunately, I need 3D now... and I just realised I'm going to need some projection... so I prepared this little puppy:
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -1.0f, 0 ); // I did try setting z to -100.0f but that didn't work well as I can't scale it back any more... and it's not accurate to the 1:1 pixel I want
XMVECTOR vecCamLookAt = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
XMVECTOR vecCamUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
matProjection = XMMatrixPerspectiveFovLH(
XMConvertToRadians( 45 ), // the field of view
Window::width / Window::height, // aspect ratio
1.0f, // the near view-plane
100.0f ); // the far view-plane
You may already know what the problem is... but if not: I have just set my field of view to 45 degrees. This will make a nice perspective and do 3D stuff great, but my vertex array is no longer going to cut the mustard, because under this fov and aspect ratio the vertices are far too huge for the view I am now looking at (see image).
I was thinking that I need to apply some scaling to the output matrix, to bring the huge coordinates back down to the size my fov and aspect ratio now call for.
What must I do to use the vertex array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer and further into the frustum, etc.?
Goal
I'm trying to avoid changing every single object's vertex array to a rough scaled prediction of what the original image would look like in world space.
Some extra info
I just wanted to clarify what kind of matrix operations I am currently doing to the world, and how I then render using those changes... so this is me doing some translations on my big old background image:
// flipY just turns the textures back around
// worldTranslation is actually the position for the background, so always 0,0,0... the 0.5 z was there to make sure things drew in the correct order when using orthographic
XMMATRIX worldTranslation = XMMatrixTranslation( 0.0f, 0.0f, 0.5f );
world->constantBuffer.Final = flipY * worldTranslation * matView;
// My thoughts are on this that somehow I need to add a scaling to this matrix...
// but if I add scaling here... it's going to mean every other game object
// (players, enemies, pickups) are going to need to be scaled... and really I just want to
// have to scale the whole lot the once.. right?
And finally, this is where it is drawn to the screen:
// Background
d3dDeviceContext->PSSetShaderResources( 0, 1, world->textures[ 0 ].GetAddressOf( ) ); // Set up texture of background
d3dDeviceContext->IASetVertexBuffers( 0, 1, world->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer (do I need the scaling here?)
d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices be drawn
d3dDeviceContext->IASetIndexBuffer( world->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &world->constantBuffer, 0, 0 ); // set the new values for the constant buffer
d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DONT FORGET IF YOU DISABLE THIS AND YOU WANT COLOUR, * BY Color.a!!!
d3dDeviceContext->DrawIndexed( ARRAYSIZE( world->indices ), 0, 0 ); // draw
And then this is what I have done to apply my matProjection, which has supersized all my vertices:
world->constantBuffer.Final = flipY * worldTranslation * matView * matProjection; // My new lovely projection and view that make everything hugeeeee... I literally see like 1 pixel of the background brah!
Please feel free to take a copy of my game and run it as-is (Windows 8 application, Visual Studio 2013 Express project) in the hope that you can help me put this all into 3D: https://github.com/jimmyt1988/FlyGame/tree/LeanerFramework/Game
It's me again. Let me try to clear a few things up.
1
Here is a little screenshot from an editor of mine:
I have edited in little black boxes to illustrate something. The axis circles you see around the objects are rendered at exactly the same size; however, they are rendered through a perspective projection. As you can see, the one on the far left appears roughly twice as large as the one in the centre. This is due purely to the nature of such a projection. If this is unacceptable, you must use a non-perspective projection.
2
The only way to maintain a 1:1 ratio of screen space to UV space is to render the object at one pixel on screen per one pixel on the texture. There is nothing more to it than that. What you can do, however, is change your texture filter options. Filter options are designed specifically for rendering at non-1:1 ratios. For example, the code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
tells OpenGL: "If you are told to sample a pixel, don't interpolate anything. Just take the nearest value on the texture and paste it on the screen."
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
This code, however, does something much better: it interpolates between pixels. I think this may be what you want, if you aren't already doing it.
These pictures (taken from http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/) show this:
Nearest:
Linear:
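Since the question itself is Direct3D 11 rather than OpenGL, the equivalent there is a sampler state. A minimal sketch, assuming a ComPtr-style setup like the question's (the d3dDevice name is an assumption):
#include <wrl/client.h> // Microsoft::WRL::ComPtr
D3D11_SAMPLER_DESC sd = {};
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; // GL_LINEAR-style interpolation
// sd.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT; // GL_NEAREST-style sampling
sd.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
sd.MaxLOD = D3D11_FLOAT32_MAX;
Microsoft::WRL::ComPtr<ID3D11SamplerState> sampler;
d3dDevice->CreateSamplerState(&sd, sampler.GetAddressOf());
d3dDeviceContext->PSSetSamplers(0, 1, sampler.GetAddressOf());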
Here is what it comes down to. You request that:
What must I do to use the vertex array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer and further into the frustum, etc.?
What you are requesting is by definition not possible, so you have to look for alternative solutions. I hope that helps somewhat.
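That said, if it is only the background plane that has to land at 1:1, there is a standard compromise: with a vertical field of view fovY and a viewport H pixels tall, a quad whose units are pixels maps 1:1 exactly when it sits at distance d = (H / 2) / tan(fovY / 2) from the camera. A hedged sketch using the question's names (only geometry at that exact depth stays 1:1; everything nearer or farther still scales perspectively):
// Distance at which 1 world unit == 1 pixel for the given vertical fov:
float fovY = XMConvertToRadians( 45.0f );
float d = ( Window::height / 2.0f ) / tanf( fovY / 2.0f ); // ~1448.6 for a 1200px-high window
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -d, 0.0f );
XMVECTOR vecCamLookAt = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
XMVECTOR vecCamUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
// Keep the far plane beyond d so the background quad is not clipped:
matProjection = XMMatrixPerspectiveFovLH( fovY, (float)Window::width / Window::height, 1.0f, d + 100.0f );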
I have used a glOrtho(500, 600, 600, 700, -100, 100) projection. With this, I want to use camera view settings via the gluLookAt() method. What should the parameters of the gluLookAt function be for this projection?
glOrtho builds a matrix that forms the "lens" of your virtual camera. gluLookAt moves that virtual camera.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd368663%28v=vs.85%29.aspx
eyeX/Y/Z are where the camera is.
centerX/Y/Z are the spot at which the camera is looking.
upX/Y/Z is which way up the camera is.
An example use might be:
gluLookAt
(
0.0f, 2.0f, -16.0f,
0.0f, 0.5f, 0.0f,
0.0f, 1.0f, 0.0f
);
This will put the camera 16 units back and slightly raised, pointing at a spot slightly above (0, 0, 0), with the top of the screen pointing along +Y.
You could change the first row of values to move the camera.
Change the second to change which part of the scene it's pointed at.
Change the third to roll/bank the camera.
The important question, however, is what do you want to do with it?
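If the goal is simply to see geometry drawn inside that glOrtho(500, 600, 600, 700, -100, 100) box, here is a minimal fixed-function sketch (one assumption about your setup, not the only valid choice): put the camera on the +Z axis looking down -Z, so eye space matches world space in x and y:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(500.0, 600.0, 600.0, 700.0, -100.0, 100.0); // the lens
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 10.0,  // eye: on the +Z axis
          0.0, 0.0, 0.0,   // center: looking at the origin
          0.0, 1.0, 0.0);  // up: +Y
// World geometry with x in [500, 600], y in [600, 700] and z near 0
// now falls inside the orthographic box and is visible.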
I'm new to DirectX, and I finally managed to load an image, which I want to display as a background image/texture.
Defining image
void setBGImage(std::string path)
{
D3DXCreateTextureFromFileA(m_Device, path.c_str(), &m_BGImage);
m_BGImageCenter = D3DXVECTOR3(450.0f, 250.0f, 0.0f); // Image is 900x500
}
Drawing image
void DrawBackground()
{
m_Sprite->Begin(D3DXSPRITE_OBJECTSPACE|D3DXSPRITE_DONOTMODIFY_RENDERSTATE);
// Texture tiling
/*
D3DXMATRIX texScaling;
D3DXMatrixScaling(&texScaling, 1.0f, 1.0f, 0.0f);
m_Device->SetTransform(D3DTS_TEXTURE0, &texScaling);*/
//D3DXMATRIX T, S, R;
//D3DXMatrixTranslation(&T, 0.0f, 0.0f, 0.0f);
//D3DXMatrixScaling(&S, 1.0f, 1.0f, 0.0f);
//D3DXMatrixRotationZ(&R, 0.45f);
//m_Sprite->SetTransform(&(S*T));
m_Sprite->Draw(m_BGImage, 0, &m_BGImageCenter, 0, D3DCOLOR_XRGB(255, 255, 255));
m_Sprite->Flush();
m_Sprite->End();
}
onResetDevice (called at startup)
void onResetDevice()
{
m_Sprite->OnResetDevice();
// Sets up the camera 640 units back looking at the origin.
D3DXMATRIX V;
D3DXVECTOR3 pos(0.0f, 0.0f, -640.0f); // This distance is just a test value to get only the image and no white background/border as soon as I have it centered I will adjust it.
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXMatrixLookAtLH(&V, &pos, &target, &up);
m_Device->SetTransform(D3DTS_VIEW, &V);
// The following code defines the volume of space the camera sees.
D3DXMATRIX P;
RECT R;
GetClientRect(m_hWnd, &R);
float width = (float)R.right;
float height = (float)R.bottom;
D3DXMatrixPerspectiveFovLH(&P, D3DX_PI*0.25f, width/height, 0.0f, 1.0f);
m_Device->SetTransform(D3DTS_PROJECTION, &P);
// This code sets texture filters, which helps to smooth out disortions when you scale a texture.
m_Device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
m_Device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
m_Device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
// This line of code disables Direct3D lighting.
m_Device->SetRenderState(D3DRS_LIGHTING, false);
// The following code specifies an alpha test and reference value.
m_Device->SetRenderState(D3DRS_ALPHAREF, 10);
m_Device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATER);
// The following code is used to setup alpha blending.
m_Device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
m_Device->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
m_Device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
m_Device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
// Indicates that we are using 2D texture coordinates.
m_Device->SetTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT2);
}
When I use these methods to render my background image, it gets displayed upside down and slightly off-centre (a little to the right), and I've got the feeling the height-to-width ratio isn't correct (the image is somewhat blurry, and it feels like the image is not as tall as it's supposed to be).
What did I try?
I tried adjusting various coordinates in order to look at the image from the other side and then rotate it, but whatever I tried turned into a white/non-existent background.
Centering the image is not such a big deal, since I can just move the camera position, but I'm confused because m_BGImageCenter was supposed to do that for me in the Draw method, and it seems not to work 100% correctly.
My questions (I guess it's OK to ask multiple questions in this context):
How can I get the image displayed right side up? (I know how you would do it in theory, but I somehow don't get it right, so please give me the coordinates.)
Why is the image not centered?
Is it possible that D3DXMatrixPerspectiveFovLH is warping my image, since it looks a little bit blurry?
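For question 1, one hedged possibility (an unverified sketch, not a known fix for this exact setup): because the sprite is drawn in object space, you can mirror it vertically through the SetTransform hook already commented out in DrawBackground. Separately, D3DXMatrixPerspectiveFovLH expects a near plane greater than zero, so the 0.0f passed in onResetDevice is worth revisiting:
D3DXMATRIX S, I;
D3DXMatrixScaling(&S, 1.0f, -1.0f, 1.0f); // flip vertically
D3DXMatrixIdentity(&I);
m_Sprite->SetTransform(&S);
m_Sprite->Draw(m_BGImage, 0, &m_BGImageCenter, 0, D3DCOLOR_XRGB(255, 255, 255));
m_Sprite->SetTransform(&I); // restore so later draws are unaffected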
This has been asked many, many times before, and I've read loads of posts and forums about it, but I just can't get one object to rotate around its own axis.
I have several objects drawn like this:
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
.....
gl.glPushMatrix();
gl.glRotatef(angle, 0.0f, 1.0f, 0.0f);
gl.glTranslatef(0.0f, 0.0f, 0.0f);
texture.bind(gl);
gl.glTexCoordPointer(2, GL2.GL_FLOAT, 0, textureRLt);
gl.glNormalPointer(GL2.GL_FLOAT, 0, normalRLt);
gl.glVertexPointer(3, GL2.GL_FLOAT, 0, vertexRLt);
gl.glDrawElements(GL2.GL_TRIANGLES, countI, GL2.GL_UNSIGNED_SHORT, indexRLt);
gl.glPopMatrix();
And this is drawn correctly, with all textures and normals applied.
I know that OpenGL executes commands in reverse, so that's why glRotatef comes first. Also, I know that all rotations are around the origin, so I need to translate the object to that origin (not that I think I have to, because "the pen" is already at the origin: I save the matrix before drawing every object and pop it afterwards). Is it something with glDrawElements? Something doesn't seem right.
Any help will be greatly appreciated. :)
Edit: I can see how the objects rotate around the main x axis, but I want them to rotate around their local x-axis.
"OpenGL executes commands reversely", means it multiplies the transformation matrix from right rather than left. What does this mean?
Imagine transformation A and B:
y = Ax
transforms x by A and yields y
This is equivalent to:
// pseudo-code:
glA()
glDraw(x)
Now, usually in programming, you expect to get the transformations in the order you write them. So, you would think that
glA()
glB()
glDraw(x)
would give you
y = BAx
but that is wrong. You actually get:
y = ABx
This means that B is applied to x first, and then A is applied to the result.
Put in English, take a look at this example:
glScalef(...) // third, scale the whole thing
glTranslatef(...) // second, translate it somewhere
glRotatef(...) // first, rotate the object (of course,
               // around its own axes, because the object is at origin)
glDrawElements(...) // draw everything at origin
So, what you need to do is to write:
// When everything is drawn, move them to destination:
gl.glTranslatef(destination[0], destination[1], destination[2]);
// For every object, rotate it the way it should, then restore transformation matrix
// object: RLt
gl.glPushMatrix();
gl.glRotatef(angle, 0.0f, 1.0f, 0.0f);
texture.bind(gl);
gl.glTexCoordPointer(2, GL2.GL_FLOAT, 0, textureRLt);
gl.glNormalPointer(GL2.GL_FLOAT, 0, normalRLt);
gl.glVertexPointer(3, GL2.GL_FLOAT, 0, vertexRLt);
gl.glDrawElements(GL2.GL_TRIANGLES, countI, GL2.GL_UNSIGNED_SHORT, indexRLt);
gl.glPopMatrix();
// object: RLt2
gl.glPushMatrix();
gl.glRotatef(angle2, 0.0f, 1.0f, 0.0f);
texture.bind(gl);
gl.glTexCoordPointer(2, GL2.GL_FLOAT, 0, textureRLt2);
gl.glNormalPointer(GL2.GL_FLOAT, 0, normalRLt2);
gl.glVertexPointer(3, GL2.GL_FLOAT, 0, vertexRLt2);
gl.glDrawElements(GL2.GL_TRIANGLES, countI2, GL2.GL_UNSIGNED_SHORT, indexRLt2);
gl.glPopMatrix();
I am not sure whether you are updating the 'angle' variable periodically. If you are not, then there won't be any rotation. The pseudo-code is below; you can use glutTimerFunc for the periodic update of the variable.
for (every opengl loop)
angle+=5.0f
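A concrete sketch of that periodic update using glutTimerFunc (this assumes a GLUT-driven loop; with JOGL, which the gl. calls in the question suggest, an FPSAnimator plays the same role):
void onTimer(int value)
{
    angle += 5.0f;                 // advance the rotation each tick
    glutPostRedisplay();           // request a redraw with the new angle
    glutTimerFunc(16, onTimer, 0); // re-arm: roughly 60 updates per second
}
// Registered once at startup:
// glutTimerFunc(16, onTimer, 0);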
Say I use glRotate to rotate the current view based on some arbitrary user input (e.g., if the left arrow key is pressed, then rtri += 2.5f):
glRotatef(rtri,0.0f,1.0f,0.0f);
Then I draw the triangle in the rotated position:
glBegin(GL_TRIANGLES); // Drawing Using Triangles
glVertex3f( 0.0f, 1.0f, 0.0f); // Top
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom Left
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
glEnd(); // Finished Drawing The Triangle
How do I get the resulting transformed vertices for use in collision detection? Or will I have to apply the transform manually myself, thus doubling up the work?
The reason I ask is that I wouldn't mind implementing display lists.
The objects you use for collision detection are usually not the objects you use for display. They are usually simpler and faster.
So yes, the way to do it is to maintain the transformation yourself; but you wouldn't be doubling up the work, because the objects are different.
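If you do need the rotated vertices on the CPU, applying the rotation yourself is cheap. A minimal sketch, assuming the only transform is the glRotatef(rtri, 0.0f, 1.0f, 0.0f) from the question:
#include <cmath>

struct Vec3 { float x, y, z; };

// Applies the same matrix glRotatef(angle, 0, 1, 0) does: a rotation about +Y.
Vec3 rotateY(const Vec3& v, float degrees)
{
    const float r = degrees * 3.14159265f / 180.0f;
    const float c = std::cos(r);
    const float s = std::sin(r);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}

// Usage, e.g. for the top vertex of the triangle:
// Vec3 top = rotateY({ 0.0f, 1.0f, 0.0f }, rtri);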
Your game loop should look like this (in C++ syntax):
void Scene::Draw()
{
this->setClearColor(0.0f, 0.0f, 0.0f);
for(std::vector<GameObject*>::iterator it = this->begin(); it != this->end(); ++it)
{
this->updateColliders(*it); // 'it' is an iterator, so dereference to reach the GameObject*
glPushMatrix();
glRotatef((*it)->rotation.angle, (*it)->rotation.x, (*it)->rotation.y, (*it)->rotation.z);
glTranslatef((*it)->position.x, (*it)->position.y, (*it)->position.z);
glScalef((*it)->scale.x, (*it)->scale.y, (*it)->scale.z);
(*it)->Draw();
glPopMatrix();
}
this->runNextFrame(this->Draw, Scene::MAX_FPS);
}
So, for instance, if I use a basic box collider with a cube, the Draw method will:
- Fill the screen with a black color (rgb: (0, 0, 0))
- For each object:
  - Compute the collisions with position and size information
  - Save the current ModelView matrix state
  - Transform the ModelView matrix (rotate, translate, scale)
  - Draw the cube
  - Restore the ModelView matrix state
- Check the FPS and run the next frame at the right time
** The Scene class inherits from the std::vector class.
I hope it will help ! :)