I am new to Direct2D and a little lost.
I am trying to rotate an object, but the way I am doing it rotates the whole game screen rather than just the one object. I am clueless about how to rotate just a specific object. I hope I have provided enough information, but if not, please let me know. And I'm sorry if this is a stupidly simple question.
Any help will be much appreciated.
void SpriteSheet::Draw(bool rotate)
{
    ID2D1Effect *chromakeyEffect = NULL;
    ID2D1Effect *scaleit = NULL;
    ID2D1Effect *rotation = NULL;
    D2D1_POINT_2F ptss = { 50, 50 };
    D2D1_VECTOR_3F vec{ -5.0f, -500.0f, 100.0f };
    D2D1_VECTOR_3F vector{ 0.0f, 1.0f, 0.0f };
    // this is the rotating part
    if (rotate)
    {
        //D2D1::Matrix3x2F::Identity()._11;
        gfx->GetRenderTarget()->SetTransform(
            D2D1::Matrix3x2F::Rotation(20, D2D1::Point2F(100, 100)));
    }
    gfx->GetDeviceContext()->CreateEffect(CLSID_D2D1ChromaKey, &chromakeyEffect);
    gfx->GetDeviceContext()->CreateEffect(CLSID_D2D1Scale, &scaleit);
    gfx->GetDeviceContext()->CreateEffect(CLSID_D2D12DAffineTransform, &rotation);
    // applying chroma key
    chromakeyEffect->SetInput(0, bmp);
    chromakeyEffect->SetValue(D2D1_CHROMAKEY_PROP_COLOR, vector);
    chromakeyEffect->SetValue(D2D1_CHROMAKEY_PROP_TOLERANCE, 0.8f);
    chromakeyEffect->SetValue(D2D1_CHROMAKEY_PROP_INVERT_ALPHA, false);
    chromakeyEffect->SetValue(D2D1_CHROMAKEY_PROP_FEATHER, false);
    // scale the object
    scaleit->SetInputEffect(0, chromakeyEffect);
    //scaleit->SetValue(D2D1_SCALE_PROP_BORDER_MODE, vec);
    //scaleit->SetValue(D2D1_SCALE_PROP_CENTER_POINT, D2D1::Vector2F(1.0f, 1.0f));
    scaleit->SetValue(D2D1_SCALE_PROP_SCALE, D2D1::Vector2F(0.50f, 0.50f));
    // draw it on screen
    gfx->GetDeviceContext()->DrawImage(scaleit, ptss);
    // release all three effects, not just the chroma key one
    if (chromakeyEffect) chromakeyEffect->Release();
    if (scaleit) scaleit->Release();
    if (rotation) rotation->Release();
}
Just came across the rotation matrix, and that solved the issue. I'm taking the scale effect and applying the rotation effect to it:
if (rotate)
{
    rotation->SetInputEffect(0, scaleit);
    D2D1_MATRIX_3X2_F matrix = D2D1::Matrix3x2F::Rotation(98, D2D1::Point2F(50, 40));
    rotation->SetValue(D2D1_2DAFFINETRANSFORM_PROP_TRANSFORM_MATRIX, matrix);
}
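For the rotated image to actually show up, the final DrawImage call then has to take the rotation effect (the last node in the effect chain) rather than the scale effect. A minimal sketch using the same variables as above:
// Chain: bitmap -> chroma key -> scale -> rotation, then draw the last effect in the chain.
rotation->SetInputEffect(0, scaleit);
rotation->SetValue(D2D1_2DAFFINETRANSFORM_PROP_TRANSFORM_MATRIX,
                   D2D1::Matrix3x2F::Rotation(98, D2D1::Point2F(50, 40)));
gfx->GetDeviceContext()->DrawImage(rotation, ptss); // draw the rotation effect, not scaleit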
I'm trying to implement an editor-like placing mode with OpenGL.
When you click somewhere on the screen, an object gets placed at that position.
So far this is my code:
void RayCastMouse(double posx, double posy)
{
    glm::fvec2 NDCCoords = glm::fvec2((2.0 * posx) / float(SCR_WIDTH) - 1.f, (-2.0 * posy) / float(SCR_HEIGHT) + 1.f);
    glm::mat4 viewProjectionInverse = glm::inverse(projection * camera.GetViewMatrix());
    glm::vec4 worldSpacePosition(NDCCoords.x, NDCCoords.y, 0.0f, 1.0f);
    glm::vec4 worldRay = viewProjectionInverse * worldSpacePosition;
    printf("X=%f / y=%f / z=%f\n", worldRay.x, worldRay.y, worldRay.z);
    m_Boxes.emplace_back(worldRay.x, 0, worldRay.z);
}
The problem is that the object isn't placed at the correct position; the worldRay vec3 is used to translate the model matrix.
Can anyone please help with this? I will really appreciate it.
By the way, dividing the worldRay xyz components by worldRay.w places the object at the camera position.
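For reference, a common approach to this kind of picking (not necessarily what your engine needs) is to unproject a near-plane point and a far-plane point, do the perspective divide by w on both, and intersect the resulting ray with the plane the objects sit on. A rough GLM sketch, reusing the same globals as above and assuming the boxes live on the ground plane y = 0; RayCastMousePlane is a hypothetical variant, not your existing function:
// Build a world-space ray from the mouse position and intersect it with the plane y = 0.
void RayCastMousePlane(double posx, double posy)
{
    glm::vec2 ndc((2.0f * float(posx)) / float(SCR_WIDTH) - 1.0f,
                  (-2.0f * float(posy)) / float(SCR_HEIGHT) + 1.0f);
    glm::mat4 invVP = glm::inverse(projection * camera.GetViewMatrix());
    // Unproject one point on the near plane (NDC z = -1) and one on the far plane (NDC z = 1).
    glm::vec4 nearPoint = invVP * glm::vec4(ndc, -1.0f, 1.0f);
    glm::vec4 farPoint  = invVP * glm::vec4(ndc,  1.0f, 1.0f);
    nearPoint /= nearPoint.w;   // the perspective divide by w
    farPoint  /= farPoint.w;
    glm::vec3 origin(nearPoint);
    glm::vec3 dir = glm::normalize(glm::vec3(farPoint) - origin);
    if (fabs(dir.y) > 1e-6f)            // ray not parallel to the ground plane
    {
        float t = -origin.y / dir.y;    // distance along the ray to y = 0
        if (t > 0.0f)
        {
            glm::vec3 hit = origin + t * dir;
            m_Boxes.emplace_back(hit.x, 0.0f, hit.z);
        }
    }
}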
First-time poster on this site, but I have hit a serious block and am lost. If this is too much to read, the question is at the bottom, but I thought the background would help.
A little background on the project:
I currently have a rendering engine set up using DirectX10. The problem appeared after implementing my design: component (data) pointers live inside a component manager (which holds pointers to the components of the entities in the world), and that data is handed to an interface manager (which holds all methods for loading, creation, updating, rendering, etc.).
Here is an image of a class diagram to make it easier to visualize:
Edited: (I do not have enough rep to post the image, so here is a link instead: http://imgur.com/L3nOyoY)
edit:
My rendering engine has a weird issue of rendering a cube (parsed from a Wavefront .obj file and put into an array of custom vertices) for only one frame. After the initial presentation of the back buffer, my cube disappears.
I have gone through a lot of debugging to fix this issue, but have yielded no answers. I have files of dumps of the change in vectors for position, scale, rotation, etc. as well as the world matrix. All data in regards to the position of the object in world space and the camera position in world space maintains its integrity. I have also checked to see if there were issues with other pointers either being corrupted, being modified where they ought not to be, or deleted by accident.
Stepping through the locals over several updates and renders I find no change in Movement Components,
no change in Texture components, and no change to my shader variables and pointers.
Here is the code that runs initially, and then the loop:
void GameWorld::Load()
{
    //initialize all members here
    mInterface = InterfaceManager(pDXManager);
    //create entities here
    mEHouse = CEntity(
        CT::MESH | CT::MOVEMENT | CT::SHADER | CT::TEXTURE,
        "House", &mCManager);
    mECamera = CEntity(
        CT::CAMERA | CT::MOVEMENT | CT::LIGHT,
        "Camera", &mCManager);
    //HACKS FOR TESTING ONLY
    //Ideally create script to parse to create entities;
    //GameWorld will have dynamic entity list;
    //////////////////////////////////////////////////
    tm = CMesh("../Models/Box.obj");
    mCManager.mMesh[0] = &tm;
    //hmmm.... how to make non-RDMS style entities...
    tc = CCamera(
        XMFLOAT3(0.0f, 0.0f, 1.0f),
        XMFLOAT3(0.0f, 0.0f, 0.0f),
        XMFLOAT3(0.0f, 1.0f, 0.0f), 1);
    mCManager.mCamera[0] = &tc;
    tmc = CMovement(
        XMFLOAT3(0.0f, 0.0f, -10.0f),
        XMFLOAT3(0.0f, 0.0f, 0.0f),
        XMFLOAT3(0.0f, 0.0f, 0.0f));
    mCManager.mMovement[1] = &tmc;
    ////////////////////////////////////////////////////
    //only after all entities are created
    mInterface.onLoad(&mCManager);
}
//core game logic goes here
void GameWorld::Update(float dt)
{
    mInterface.Update(dt, &mCManager);
}
//core rendering logic goes here
void GameWorld::Render()
{
    pDXManager->BeginScene();
    //render calls go here
    mInterface.Render(&mCManager);
    //disappears after end scene
    pDXManager->EndScene();
}
And here are the interface update and render methods:
void InterfaceManager::onLoad(CComponentManager* pCManager)
{
    //create all
    for(int i = 0; i < pCManager->mTexture.size(); ++i)
    {
        mTexture2D.loadTextureFromFile(pDXManager->mD3DDevice, pCManager->mTexture[i]);
    }
    for(int i = 0; i < pCManager->mMesh.size(); ++i)
    {
        mMesh.loadMeshFromOBJ(pCManager->mMesh[i]);
        mMesh.createMesh(pDXManager->mD3DDevice, pCManager->mMesh[i]);
    }
    for(int i = 0; i < pCManager->mShader.size(); ++i)
    {
        mShader.Init(pDXManager->mD3DDevice, pDXManager->mhWnd, pCManager->mShader[i], pCManager->mTexture[i]);
    }
    //TODO: put this somewhere else to maintain structure
    XMMATRIX pFOVLH = XMMatrixPerspectiveFovLH((float)D3DX_PI / 4.0f, (float)pDXManager->mWindowWidth / pDXManager->mWindowHeight, 0.1f, 1000.0f);
    XMStoreFloat4x4(&pCManager->mCamera[0]->mProjectionMat, pFOVLH);
}
void InterfaceManager::Update(float dt, CComponentManager* pCManager)
{
    //update input
    //update ai
    //update collision detection
    //update physics
    //update movement
    for(int i = 0; i < pCManager->mMovement.size(); ++i)
    {
        mMovement.transformToWorld(pCManager->mMovement[i]);
    }
    //update animations
    //update camera
    //There is only ever one active camera
    //TODO: somehow set up for an activecamera variable
    mCamera.Update(pCManager->mCamera[0], pCManager->mMovement[pCManager->mCamera[0]->mOwnerID]);
}
void InterfaceManager::Render(CComponentManager* pCManager)
{
    for(int i = 0; i < pCManager->mMesh.size(); ++i)
    {
        //render meshes
        mMesh.RenderMeshes(pDXManager->mD3DDevice, pCManager->mMesh[i]);
        //set shader variables
        mShader.setShaderMatrices(pCManager->mCamera[0], pCManager->mShader[i], pCManager->mMovement[i]);
        mShader.setShaderLight(pCManager->mLight[i], pCManager->mShader[i]);
        mShader.setShaderTexture(pCManager->mTexture[i]);
        //render shader
        mShader.RenderShader(pDXManager->mD3DDevice, pCManager->mShader[i], pCManager->mMesh[i]);
    }
}
In short, my question could be this: Why is my cube only rendering for one frame, then disappearing?
UPDATE: I found the method causing the issue by isolating it. It lies within my Update() method, before Render(). It is when my camera is updated that the problem occurs. Here is the code for that method; perhaps someone can see what I am not seeing?
void ICamera::Update(CCamera* pCamera, CMovement* pMovement)
{
    XMMATRIX rotMat = XMMatrixRotationRollPitchYaw(pMovement->mRotation.x,
                                                   pMovement->mRotation.y,
                                                   pMovement->mRotation.z);
    XMMATRIX view = XMLoadFloat4x4(&pCamera->mViewMat);
    XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
    XMVECTOR lookAt = XMLoadFloat3(&pCamera->mEye);
    XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);
    lookAt = XMVector3TransformCoord(lookAt, rotMat);
    up = XMVector3TransformCoord(up, rotMat);
    lookAt = pos + lookAt;
    view = XMMatrixLookAtLH(pos, lookAt, up);
    XMStoreFloat3(&pCamera->mEye, lookAt);
    XMStoreFloat3(&pCamera->mUp, up);
    XMStoreFloat4x4(&pCamera->mViewMat, view);
}
From the camera update code, one obvious issue is the lookAt variable.
Usually a lookAt variable is a "point", not a "vector" (direction), but from your code it seems you saved it as a point and used it as a vector. Your camera definition is also incomplete; a standard camera should contain at least a position, an up direction, and a view direction (or lookAt point).
I assume you want to rotate and translate your camera in ICamera::Update, so you rotate the up and view directions and translate the position.
I guess your CMovement gives you a new camera position and applies a rotation to the camera. You can then try to modify it as below (mEye and mLookAt are points, mUp is a direction):
XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
XMVECTOR lookAt = XMLoadFloat3(&pCamera->mLookAt);
XMVECTOR oldPos = XMLoadFloat3(&pCamera->mEye);
XMVECTOR viewDir = lookAt - oldPos;
XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);
viewDir = XMVector3TransformCoord(viewDir, rotMat);
up = XMVector3TransformCoord(up, rotMat);
lookAt = pos + viewDir;
view = XMMatrixLookAtLH(pos, lookAt, up);
XMStoreFloat3(&pCamera->mEye, pos);
XMStoreFloat3(&pCamera->mLookAt, lookAt);
XMStoreFloat3(&pCamera->mUp, up);
I have looked at a ton of quaternion examples on several sites including this one, but found none that answer this, so here goes...
I want to orbit my camera around a large sphere, and have the camera always point at the center of the sphere. To make things easy, the sphere is located at {0,0,0} in the world. I am using a quaternion for camera orientation, and a vector for camera position. The problem is that the camera position orbits the sphere perfectly as it should, but always looks one constant direction straight forward instead of adjusting to point to the center as it orbits.
This must be something simple, but I am new to quaternions... what am I missing?
I'm using C++, DirectX9. Here is my code:
// Note: g_camRotAngleYawDir and g_camRotAnglePitchDir are set to either 1.0f, -1.0f according to keypresses, otherwise equal 0.0f
// Also, g_camOrientationQuat is just an identity quat to begin with.
float theta = 0.05f;
D3DXQUATERNION g_deltaQuat( 0.0f, 0.0f, 0.0f, 1.0f );
D3DXQuaternionRotationYawPitchRoll(&g_deltaQuat, g_camRotAngleYawDir * theta, g_camRotAnglePitchDir * theta, g_camRotAngleRollDir * theta);
D3DXQUATERNION tempQuat = g_camOrientationQuat;
D3DXQuaternionMultiply(&g_camOrientationQuat, &tempQuat, &g_deltaQuat);
D3DXMatrixRotationQuaternion(&g_viewMatrix, &g_camOrientationQuat);
g_viewMatrix._41 = g_camPosition.x;
g_viewMatrix._42 = g_camPosition.y;
g_viewMatrix._43 = g_camPosition.z;
g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );
[EDIT - Feb 13, 2012]
Ok, here's my understanding so far:
Move the camera using an angular delta each frame.
Get a vector from center to camera-pos.
Call quaternionRotationBetweenVectors with a z-facing unit vector and the target vector.
Then use the result of that function for the orientation of the view matrix, and the camera-position goes in the translation portion of the view matrix.
Here's the new code (called every frame)...
// This part orbits the position around the sphere according to deltas for yaw, pitch, roll
D3DXQuaternionRotationYawPitchRoll(&deltaQuat, yawDelta, pitchDelta, rollDelta);
D3DXMatrixRotationQuaternion(&mat1, &deltaQuat);
D3DXVec3Transform(&g_camPosition, &g_camPosition, &mat1);
// This part adjusts the orientation of the camera to point at the center of the sphere
dir1 = normalize(vec3(0.0f, 0.0f, 0.0f) - g_camPosition);
QuaternionRotationBetweenVectors(&g_camOrientationQuat, vec3(0.0f, 0.0f, 1.0f), &dir1);
D3DXMatrixRotationQuaternion(&g_viewMatrix, &g_camOrientationQuat);
g_viewMatrix._41 = g_camPosition.x;
g_viewMatrix._42 = g_camPosition.y;
g_viewMatrix._43 = g_camPosition.z;
g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );
...I tried that solution out, without success. What am I doing wrong?
It should be as easy as doing the following whenever you update the position (assuming the camera is pointing along the z-axis):
direction = normalize(center - position)
orientation = quaternionRotationBetweenVectors(vec3(0,0,1), direction)
It is fairly easy to find examples of how to implement quaternionRotationBetweenVectors. You could start with the question "Finding quaternion representing the rotation from one vector to another".
Here's an untested sketch of an implementation using the DirectX9 API:
D3DXQUATERNION* quaternionRotationBetweenVectors(__inout D3DXQUATERNION* result, __in const D3DXVECTOR3* v1, __in const D3DXVECTOR3* v2)
{
D3DXVECTOR3 axis;
D3DXVec3Cross(&axis, v1, v2);
D3DXVec3Normalize(&axis, &axis);
float angle = acos(D3DXVec3Dot(v1, v2));
D3DXQuaternionRotationAxis(result, &axis, angle);
return result;
}
This implementation expects v1, v2 to be normalized.
You need to show how you calculate the angles. Is there a reason why you aren't using D3DXMatrixLookAtLH?
I am trying to draw a line between two vertices with D3D11. I have some experience with D3D9 and D3D11, but drawing a line that starts at one given pixel and ends at another seems to be a problem in D3D11.
What I did:
I added 0.5f to the pixel coordinates of each vertex to fit the texel/pixel coordinate system (I read the Microsoft pages on the differences between the D3D9 and D3D11 coordinate systems):
f32 fOff = 0.5f;
ColoredVertex newVertices[2] =
{
    { D3DXVECTOR3(fStartX + fOff, fStartY + fOff, 0), vecColorRGB },
    { D3DXVECTOR3(fEndX + fOff, fEndY + fOff, 0), vecColorRGB }
};
Generated an ortho projection matrix to fit the render target:
D3DXMatrixOrthoOffCenterLH(&MatrixOrthoProj,0.0f,(f32)uRTWidth,0.0f,(f32)uRTHeight,0.0f,1.0f);
D3DXMatrixTranspose(&cbConstant.m_matOrthoProjection,&MatrixOrthoProj);
Set RasterizerState, BlendState, Viewport, ...
Draw Vertices as D3D11_PRIMITIVE_TOPOLOGY_LINELIST
Problem:
The line seems to be one pixel too short. It starts at the given pixel coordinate and fits it perfectly. The direction of the line looks correct, but the pixel where I want the line to end is still not colored. It looks like the line is just one pixel too short...
Is there any tutorial explaining this problem, or does anybody have the same problem? As I remember, it wasn't as difficult in D3D9.
Please ask if you need further information.
Thanks, Stefan
EDIT: found the rasterization rules for d3d10 (should be the same for d3d11):
http://msdn.microsoft.com/en-us/library/cc627092%28v=vs.85%29.aspx#Line_1
I hope this will help me understanding...
According to the rasterisation rules (link in the question above) I might have found a solution that should work:
Sort the vertices so that StartX < EndX and StartY < EndY.
Add (0.5, 0.5) to the start vertex (as I did before) to move the vertex to the center of the pixel.
Add (1.0, 1.0) to the end vertex to move the vertex to the lower right corner of its pixel.
This is needed to tell the rasterizer that the last pixel of the line should be drawn.
f32 fXStartOff = 0.5f;
f32 fYStartOff = 0.5f;
f32 fXEndOff = 1.0f;
f32 fYEndOff = 1.0f;
ColoredVertex newVertices[2] =
{
    { D3DXVECTOR3((f32)fStartX + fXStartOff, (f32)fStartY + fYStartOff, 0), vecColorRGB },
    { D3DXVECTOR3((f32)fEndX + fXEndOff, (f32)fEndY + fYEndOff, 0), vecColorRGB }
};
If you know a better solution, please let me know.
I don't know D3D11, but your problem sounds a lot like the D3DRS_LASTPIXEL render state from D3D9 - maybe there's an equivalent for D3D11 you need to look into.
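For reference, in D3D9 that state is toggled like this (TRUE, which is the default, makes the rasterizer draw the final pixel of a line; pDevice here is just a placeholder for your IDirect3DDevice9):
// D3D9 only: enable drawing of the last pixel in a line or triangle edge (default is TRUE)
pDevice->SetRenderState(D3DRS_LASTPIXEL, TRUE);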
I encountered the exact same issue, and I fixed it thanks to this discussion.
My vertices are stored in a D3D11_PRIMITIVE_TOPOLOGY_LINELIST vertex buffer.
Thanks for this useful post; you helped me fix this bug today.
It was REALLY trickier than I thought at the start.
Here are a few lines of my code.
// projection matrix code
float width = 1024.0f;
float height = 768.0f;
DirectX::XMMATRIX offsetedProj = DirectX::XMMatrixOrthographicRH(width, height, 0.0f, 10.0f);
DirectX::XMMATRIX proj = DirectX::XMMatrixMultiply(DirectX::XMMatrixTranslation(- width / 2, height / 2, 0), offsetedProj);
// view matrix code
// screen top left pixel is 0,0 and bottom right is 1023,767
DirectX::XMMATRIX viewMirrored = DirectX::XMMatrixLookAtRH(eye, at, up);
DirectX::XMMATRIX mirrorYZ = DirectX::XMMatrixScaling(1.0f, -1.0f, -1.0f);
DirectX::XMMATRIX view = DirectX::XMMatrixMultiply(mirrorYZ, viewMirrored);
// draw line code in my visual debug tool.
void TVisualDebug::DrawLine2D(int2 const& parStart,
                              int2 const& parEnd,
                              TColor parColorStart,
                              TColor parColorEnd,
                              float parDepth)
{
    FLine2DsDirty = true;
    // D3D11_PRIMITIVE_TOPOLOGY_LINELIST
    float2 const startFloat(parStart.x() + 0.5f, parStart.y() + 0.5f);
    float2 const endFloat(parEnd.x() + 0.5f, parEnd.y() + 0.5f);
    float2 const diff = endFloat - startFloat;
    // returns the normalized difference, or float2(1.0f, 1.0f) if the distance between the
    // points is null; the result is then multiplied by something a little bigger than 0.5f,
    // because 0.5f is not enough.
    float2 const diffNormalized = diff.normalized_replace_if_null(float2(1.0f, 1.0f)) * 0.501f;
    size_t const currentIndex = FLine2Ds.size();
    FLine2Ds.resize(currentIndex + 2);
    render::vertex::TVertexColor* baseAddress = FLine2Ds.data() + currentIndex;
    render::vertex::TVertexColor& v0 = baseAddress[0];
    render::vertex::TVertexColor& v1 = baseAddress[1];
    v0.FPosition = float3(startFloat.x(), startFloat.y(), parDepth);
    v0.FColor = parColorStart;
    v1.FPosition = float3(endFloat.x() + diffNormalized.x(), endFloat.y() + diffNormalized.y(), parDepth);
    v1.FColor = parColorEnd;
}
I tested several DrawLine2D calls, and it seems to work well.
I am having trouble with a camera class I am trying to use in my program. When I change the camera_target of the gluLookAt call, my whole terrain is rotating instead of just the camera rotating like it should.
Here is some code from my render method:
camera->Place();
ofSetColor(255, 255, 255, 255);
//draw axis lines
//x-axis
glBegin(GL_LINES);
glColor3f(1.0f,0.0f,0.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(100.0f, 0.0f,0.0f);
glEnd();
//y-axis
glBegin(GL_LINES);
glColor3f(0.0f,1.0f,0.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(0.0f, 100.0f,0.0f);
glEnd();
//z-axis
glBegin(GL_LINES);
glColor3f(0.0f,0.0f,1.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(0.0f, 0.0f,100.0f);
glEnd();
glColor3f(1,1,1);
terrain->Draw();
And the rotate and place methods from my camera class:
void Camera::RotateCamera(float h, float v) {
    hRadians += h;
    vRadians += v;
    cam_target.y = cam_position.y + (float)(radius * sin(vRadians));
    cam_target.x = cam_position.x + (float)(radius * cos(vRadians) * cos(hRadians));
    cam_target.z = cam_position.z + (float)(radius * cos(vRadians) * sin(hRadians));
    cam_up.x = cam_position.x - cam_target.x;
    cam_up.y = ABS(cam_position.y + (float)(radius * sin(vRadians + PI/2)));
    cam_up.z = cam_position.z - cam_target.z;
}
void Camera::Place() {
    //position, camera target, up vector
    gluLookAt(cam_position.x, cam_position.y, cam_position.z,
              cam_target.x, cam_target.y, cam_target.z,
              cam_up.x, cam_up.y, cam_up.z);
}
The problem is that the whole terrain is moving around the camera, whereas the camera should just be rotating.
Thanks for any help!
EDIT - Found some great tutorials, and taking into account the answers on here, I made a better camera class. Thanks guys.
From the POV of the terrain, yes, the camera is rotating. But, since your view is from the POV of the camera, when you rotate the camera, it appears that the terrain is rotating. This is the behavior that gluLookAt() is intended to produce. If there is something else that you expected, you will need to rotate only the geometry that you want rotated, and not try to rotate using gluLookAt().
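For example, if the goal is to spin just one object (and not the view), a minimal immediate-mode sketch would wrap only that object's draw call in its own model transform (objectAngle here is a hypothetical per-object angle you would track yourself):
// rotate only this object about the y-axis; the view set by gluLookAt is untouched
glPushMatrix();
glRotatef(objectAngle, 0.0f, 1.0f, 0.0f);
terrain->Draw();
glPopMatrix();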
Update 1: Based on the discussion below, try this:
void Camera::RotateCamera(float h, float v)
{
    hRadians += h;
    vRadians += v;
    cam_norm.x = cos(vRadians) * sin(hRadians);
    cam_norm.y = -sin(vRadians);
    cam_norm.z = cos(vRadians) * cos(hRadians);
    cam_up.x = sin(vRadians) * sin(hRadians);
    cam_up.y = cos(vRadians);
    cam_up.z = sin(vRadians) * cos(hRadians);
}
void Camera::Place()
{
    //position, camera target, up vector
    gluLookAt(cam_pos.x, cam_pos.y, cam_pos.z,
              cam_pos.x + cam_norm.x, cam_pos.y + cam_norm.y, cam_pos.z + cam_norm.z,
              cam_up.x, cam_up.y, cam_up.z);
}
Separating the camera normal (the direction the camera is looking) from the position allows you to independently change the position, pan and tilt... that is, you can change the position without having to recompute the normal and up vectors.
Disclaimer: This is untested, just what I could do on the back of a sheet of paper. It assumes a right handed coordinate system and that pan rotation is applied before tilt.
Update 2: Derived using linear algebra rather than geometry... again, untested, but I have more confidence in this.
Well, rotating the camera or orbiting the terrain around the camera looks essentially the same.
If you want to orbit the camera around a fixed terrain point you have to modify the camera position, not the target.
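As a rough sketch of that idea, reusing the members of the camera class above (OrbitCamera is a hypothetical method; cx, cy, cz is the fixed point being orbited), the position moves on a sphere while the target stays put:
void Camera::OrbitCamera(float h, float v, float cx, float cy, float cz)
{
    hRadians += h;
    vRadians += v;
    // move the eye on a sphere of the given radius around the fixed point...
    cam_position.x = cx + (float)(radius * cos(vRadians) * cos(hRadians));
    cam_position.y = cy + (float)(radius * sin(vRadians));
    cam_position.z = cz + (float)(radius * cos(vRadians) * sin(hRadians));
    // ...and keep looking at that point
    cam_target.x = cx;
    cam_target.y = cy;
    cam_target.z = cz;
    // a constant world up is fine as long as the camera never passes over the poles
    cam_up.x = 0.0f;
    cam_up.y = 1.0f;
    cam_up.z = 0.0f;
}
Camera::Place() can then stay exactly as it is; gluLookAt will orbit the eye around the fixed point instead of swinging the target around the eye.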
Should it not be?:
cam_up.x = cam_target.x - cam_position.x;
cam_up.y = ABS(cam_position.y+(float)(radius*sin(vRadians+PI/2))) ;
cam_up.z = cam_target.z - cam_position.z;
Perhaps you should normalize cam_up as well.
HTH