OpenGL applying glMaterial to only one object

With the code below I am trying to apply lighting to the sun. When I apply GL_EMISSION, though, it seems to apply the emission to the other objects that are drawn as well.
GL11.glPushMatrix();
{
// how shiny are the front faces of the sun (specular exponent)
float sunFrontShininess = 2.0f;
// specular reflection of the front faces of the sun
float sunFrontSpecular[] = {0.1f, 0.1f, 0.0f, 1.0f};
// diffuse reflection of the front faces of the sun
float sunFrontDiffuse[] = {1.0f, 1.0f, 1.0f, 1.0f};
float sunEmission[] = {0.2f, 0.2f, 0.2f, 1f};
// set the material properties for the sun using OpenGL
//???????????????????????????????????????????????????????????????????????????????????????
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_EMISSION, FloatBuffer.wrap(sunEmission));
//???????????????????????????????????????????????????????????????????????????????????????
GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS, sunFrontShininess);
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_SPECULAR, FloatBuffer.wrap(sunFrontSpecular));
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_DIFFUSE, FloatBuffer.wrap(sunFrontDiffuse));
// position and draw the sun using a sphere quadric object
GL11.glTranslatef(4.0f, 7.0f, -19.0f);
new Sphere().draw(0.5f,10,10);
}
GL11.glPopMatrix();
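A likely cause, for reference: in the fixed-function pipeline, glMaterial state is global and stays in effect until it is changed, so an emission value set for the sun also applies to everything drawn afterwards. A minimal sketch of the usual fix (shown with raw OpenGL calls, to which the LWJGL GL11 methods map one-to-one; drawSun() is a hypothetical stand-in for the sphere drawing above): reset GL_EMISSION back to black, the default, once the sun has been drawn.
const GLfloat sunEmission[] = {0.2f, 0.2f, 0.2f, 1.0f};
const GLfloat noEmission[]  = {0.0f, 0.0f, 0.0f, 1.0f}; // default emission
glMaterialfv(GL_FRONT, GL_EMISSION, sunEmission);
drawSun();                                       // the sphere drawing from above
glMaterialfv(GL_FRONT, GL_EMISSION, noEmission); // restore the default so other objects are unaffected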

Related

Why does my vector not rotate correctly OpenGL/GLM?

I am trying to learn how to do some transformations on 3D points in OpenGL. Using this cheat sheet I believe that I have the correct matrix to multiply with the vector I want to rotate. However, when I multiply and print the new coordinates, the result looks incorrect. (Rotating (1,0,0) by 90 degrees counterclockwise should result in (0,1,0), correct?) Why is this not working?
My code:
glm::vec4 vec(1.0f, 0.0f, 0.0f, 1.0f);
glm::mat4 trans = {
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, cos(glm::radians(90.0f)), -sin(glm::radians(90.0f)), 0.0f,
0.0f, sin(glm::radians(90.0f)), cos(glm::radians(90.0f)), 0,
0.0f, 0.0f, 0.0f, 1.0f
};
vec = trans * vec; //I should get 0.0, 1.0, 0.0 right?
std::cout << vec.x << ", " << vec.y << ", " << vec.z << std::endl;
The above prints 1.0, 0.0, 0.0 indicating that there was no change at all?
I also tried using the rotate function in GLM to generate my matrix rather than specifying it manually, but I still did not get what I think should be correct (I got a different wrong answer).
glm::mat4 trans = glm::rotate(trans, 90.0f, glm::vec3(0.0, 0.0, 1.0)); //EDIT: my bad, should've been z axis not x, still not working
The above prints: -2.14..e+08, -2.14..e+08, -2.14..e+08
(PS: I just took Geometry in the previous school year, my apologies if the math is incorrect. I have a basic understanding of matrices and matrix multiplication that I picked up today to learn OpenGL transformations, but other than that I'm sort of a noob at this.)
In your code, you're rotating a unit vector that lies on the x-axis around the x-axis, and that doesn't change the vector (imagine rotating a pencil around its own axis: its direction doesn't change at all). To achieve what you wanted, you should rotate the vector around the z-axis instead. Keep in mind that glm::mat4 initializer lists are filled column by column, so each line below is a column of the matrix:
glm::mat4 trans = {
cos(glm::radians(90.0f)), sin(glm::radians(90.0f)), 0.0f, 0.0f,
-sin(glm::radians(90.0f)), cos(glm::radians(90.0f)), 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
};
Besides that, glm::mat4 trans = glm::rotate(trans, 90.0f, glm::vec3(0.0, 0.0, 1.0)); isn't returning the desired result because trans needs to be initialized before being passed to glm::rotate. Try writing it like this:
glm::mat4 trans; // Default constructor called, trans is an identity matrix
trans = glm::rotate(trans, glm::radians(90.0f), glm::vec3(0.0, 0.0, 1.0));
Still, you might not get exactly the expected vector (0.0f, 1.0f, 0.0f) due to floating-point precision in glm::radians and cos/sin. In my test I got the vector (-4.37113883e-008f, 1.0f, 0.0f), and -4.37113883e-008 is really -0.0000000437113883, which is a number very close to 0, the expected result.
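For reference, a minimal self-contained sketch of the corrected version (assuming GLM is available; glm::mat4(1.0f) is used so the matrix starts as the identity on all GLM versions, including those whose default constructor does not initialize):
#include <iostream>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
int main()
{
    glm::vec4 vec(1.0f, 0.0f, 0.0f, 1.0f);
    glm::mat4 trans(1.0f); // explicit identity
    trans = glm::rotate(trans, glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    vec = trans * vec; // prints roughly 0, 1, 0
    std::cout << vec.x << ", " << vec.y << ", " << vec.z << std::endl;
    return 0;
}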
The reason why your own rotation matrix is not changing the input is simple: your rotation only affects the y and z coordinates, and since those are zero the result is exactly the same as the input. The x coordinate is multiplied by 1 into the output x coordinate, so it stays the same.
You can use the vector 1.0, 2.0, 3.0, 1.0, for example, and then you will see changes.
As for the GLM version, I can't say why it would give a strange result; I have never had issues with it, but I haven't used it much.

Unexpected result when unprojecting screen coordinates in DirectX

In order to be able to determine whether the user clicked on any of my 3D objects I’m trying to turn the screen coordinates of the click into a vector which I then use to check whether any of my triangles got hit. To do so I’m using the XMVector3Unproject method provided by DirectX and I’m implementing everything in C++/CX.
The problem that I’m facing is that the vector that results from unprojecting the screen coordinates is not at all as I expect it to be. The below image illustrates this:
The cursor position at the time that the click occurs (highlighted in yellow) is visible in the isometric view on the left. As soon as I click, the vector resulting from unprojecting appears behind the model indicated in the images as the white line penetrating the model. So instead of originating at the cursor location and going into the screen in the isometric view it is appearing at a completely different position.
When I move the mouse horizontally in the isometric view while clicking, and after that move it vertically while clicking, the pattern below appears. All lines in the two images represent vectors resulting from clicking. The model has been removed for better visibility.
So as can be seen from the above image all vectors seem to originate from the same location. If I change the view and repeat the process the same pattern appears but with a different origin of the vectors.
Here are the code snippets that I use to come up with this. First of all I receive the cursor position using the below code and pass it to my “SelectObject” method together with the width and height of the drawing area:
void Demo::OnPointerPressed(Object^ sender, PointerEventArgs^ e)
{
Point currentPosition = e->CurrentPoint->Position;
if(m_model->SelectObject(currentPosition.X, currentPosition.Y, m_renderTargetWidth, m_renderTargetHeight))
{
m_RefreshImage = true;
}
}
The “SelectObject” method looks as follows:
bool Model::SelectObject(float screenX, float screenY, float screenWidth, float screenHeight)
{
XMMATRIX projectionMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection);
XMMATRIX viewMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view);
XMMATRIX modelMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model);
XMVECTOR v = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
0.0f,
0.0f,
screenWidth,
screenHeight,
0.0f,
1.0f,
projectionMatrix,
viewMatrix,
modelMatrix);
XMVECTOR rayOrigin = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
0.0f,
0.0f,
screenWidth,
screenHeight,
0.0f,
1.0f,
projectionMatrix,
viewMatrix,
modelMatrix);
// Code to retrieve v0, v1 and v2 is omitted
if(Intersects(rayOrigin, XMVector3Normalize(v - rayOrigin), v0, v1, v2, depth))
{
return true;
}
}
Eventually the calculated vector is used by the Intersects method of the DirectX::TriangleTests namespace to detect if a triangle got hit. I’ve omitted that code in the above snippet because it is not relevant for this problem.
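(For reference, the Intersects overload used above is the ray–triangle test from DirectX::TriangleTests declared in <DirectXCollision.h>; a minimal sketch of the call, where v, rayOrigin, v0, v1 and v2 are the vectors from the snippet above and depth receives the hit distance along the ray:)
#include <DirectXCollision.h>
using namespace DirectX;
// Ray built from the two unprojected points: near point as origin, normalized difference as direction.
XMVECTOR rayDirection = XMVector3Normalize(v - rayOrigin);
float depth = 0.0f;
bool hit = TriangleTests::Intersects(rayOrigin, rayDirection, v0, v1, v2, depth);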
To render these images I use an orthographic projection matrix and a camera that can be rotated around both its local x- and y-axis which generates the view matrix. The world matrix always stays the same, i.e. it is simply an identity matrix.
The view matrix is calculated as follows (based on the example in Frank Luna’s book 3D Game Programming):
void Camera::SetViewMatrix()
{
XMFLOAT3 cameraPosition;
XMFLOAT3 cameraXAxis;
XMFLOAT3 cameraYAxis;
XMFLOAT3 cameraZAxis;
XMFLOAT4X4 viewMatrix;
// Keep camera's axes orthogonal to each other and of unit length.
m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));
// m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
// to normalize the below cross product of the two.
m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);
// Fill in the view matrix entries.
float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));
XMStoreFloat3(&cameraPosition, m_cameraPosition);
XMStoreFloat3(&cameraXAxis , m_cameraXAxis);
XMStoreFloat3(&cameraYAxis , m_cameraYAxis);
XMStoreFloat3(&cameraZAxis , m_cameraZAxis);
viewMatrix(0, 0) = cameraXAxis.x;
viewMatrix(1, 0) = cameraXAxis.y;
viewMatrix(2, 0) = cameraXAxis.z;
viewMatrix(3, 0) = x;
viewMatrix(0, 1) = cameraYAxis.x;
viewMatrix(1, 1) = cameraYAxis.y;
viewMatrix(2, 1) = cameraYAxis.z;
viewMatrix(3, 1) = y;
viewMatrix(0, 2) = cameraZAxis.x;
viewMatrix(1, 2) = cameraZAxis.y;
viewMatrix(2, 2) = cameraZAxis.z;
viewMatrix(3, 2) = z;
viewMatrix(0, 3) = 0.0f;
viewMatrix(1, 3) = 0.0f;
viewMatrix(2, 3) = 0.0f;
viewMatrix(3, 3) = 1.0f;
m_modelViewProjectionConstantBufferData->view = viewMatrix;
}
It is influenced by two methods which rotate the camera around its local x- and y-axes:
void Camera::ChangeCameraPitch(float angle)
{
XMMATRIX rotationMatrix = XMMatrixRotationAxis(m_cameraXAxis, angle);
m_cameraYAxis = XMVector3TransformNormal(m_cameraYAxis, rotationMatrix);
m_cameraZAxis = XMVector3TransformNormal(m_cameraZAxis, rotationMatrix);
}
void Camera::ChangeCameraYaw(float angle)
{
XMMATRIX rotationMatrix = XMMatrixRotationAxis(m_cameraYAxis, angle);
m_cameraXAxis = XMVector3TransformNormal(m_cameraXAxis, rotationMatrix);
m_cameraZAxis = XMVector3TransformNormal(m_cameraZAxis, rotationMatrix);
}
The world / model matrix and the projection matrix are calculated as follows:
void Model::SetProjectionMatrix(float width, float height, float nearZ, float farZ)
{
XMMATRIX orthographicProjectionMatrix = XMMatrixOrthographicRH(width, height, nearZ, farZ);
XMFLOAT4X4 orientation = XMFLOAT4X4
(
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
);
XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->projection, XMMatrixTranspose(orthographicProjectionMatrix * orientationMatrix));
}
void Model::SetModelMatrix()
{
XMFLOAT4X4 orientation = XMFLOAT4X4
(
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
);
XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->model, XMMatrixTranspose(orientationMatrix));
}
Frankly speaking I do not yet understand the problem that I’m facing. I’d be grateful if anyone with a deeper insight could give me some hints as to where I need to apply changes so that the vector calculated from the unprojection starts at the cursor position and moves into the screen.
Edit 1:
I assume it has to do with the fact that my camera is located at (0, 0, 0) in world coordinates. The camera rotates around its local x- and y-axes. From what I understand, the view matrix created by the camera builds the plane onto which the image is projected. If that is the case it would explain why the ray is at a somewhat "unexpected" location.
My assumption is that I need to move the camera out of the center so that it is located outside of the object. However, if I simply modify the member variable m_cameraPosition of the camera, my model gets totally distorted.
Anyone out there able and willing to help?
Thanks for your hint, Kapil. I tried the XMMatrixLookAtRH method but could not change the camera's pitch / yaw using it, so I discarded that approach and instead generated the matrix myself.
What resolved my problem was transposing the model, view and projection matrices using XMMatrixTranspose before passing them to XMVector3Unproject. So instead of having the code as follows
XMMATRIX projectionMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection);
XMMATRIX viewMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view);
XMMATRIX modelMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model);
XMVECTOR rayBegin = XMVector3Unproject(XMVectorSet(screenX, screenY, -m_boundingSphereRadius, 0.0f),
0.0f,
0.0f,
screenWidth,
screenHeight,
0.0f,
1.0f,
projectionMatrix,
viewMatrix,
modelMatrix);
it needs to be
XMMATRIX projectionMatrix = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection));
XMMATRIX viewMatrix = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view));
XMMATRIX modelMatrix = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model));
XMVECTOR rayBegin = XMVector3Unproject(XMVectorSet(screenX, screenY, -m_boundingSphereRadius, 0.0f),
0.0f,
0.0f,
screenWidth,
screenHeight,
0.0f,
1.0f,
projectionMatrix,
viewMatrix,
modelMatrix);
It's not entirely clear to me why I need to transpose the matrices before passing them to the unproject method. However, I suspect that it is related to the issue that I'm facing when I move my camera. That problem has already been described here on StackOverflow in this posting.
I did not manage to solve that problem yet. Simply transposing the view matrix does not resolve it. However, my main problem is solved and my model is finally clickable.
If anyone has anything to add and shine some light on why the matrices need to be transposed or why moving the camera distorts the model please go ahead and post comments or answers.
I used the XMMatrixLookAtRH API in the Model::SetViewMatrix() function to calculate the view matrix and got decent values for the v and rayOrigin vectors.
For eg:
XMStoreFloat4x4(
&m_modelViewProjectionConstantBufferData->view,
XMMatrixLookAtRH(m_cameraPosition, XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f),
XMVectorSet(1.0f, 0.0f, 0.0f, 0.0f))
);
Though I haven't been able to visualize the output on screen, I checked the result by computing for simple values in a console application and the vector values seem to be correct. Please check in your application and confirm.
NOTE: You have to give focal point and up direction vector parameters to use XMMatrixLookAtRH API instead of your current approach.
I am able to get equal values of v and rayOrigin vectors using XMMatrixLookAtRH method as well as your custom view matrix with this code without needing matrix transpose operations:
#include <directxmath.h>
using namespace DirectX;
XMVECTOR m_cameraXAxis;
XMVECTOR m_cameraYAxis;
XMVECTOR m_cameraZAxis;
XMVECTOR m_cameraPosition;
XMMATRIX gView;
XMMATRIX gView2;
XMMATRIX gProj;
XMMATRIX gModel;
void SetViewMatrix()
{
XMVECTOR lTarget = XMVectorSet(2.0f, 2.0f, 2.0f, 1.0f);
m_cameraPosition = XMVectorSet(1.0f, 1.0f, 1.0f, 1.0f);
m_cameraZAxis = XMVector3Normalize(XMVectorSubtract(m_cameraPosition, lTarget));
m_cameraXAxis = XMVector3Normalize(XMVector3Cross(XMVectorSet(1.0f, -1.0f, -1.0f, 0.0f), m_cameraZAxis));
XMFLOAT3 cameraPosition;
XMFLOAT3 cameraXAxis;
XMFLOAT3 cameraYAxis;
XMFLOAT3 cameraZAxis;
XMFLOAT4X4 viewMatrix;
// Keep camera's axes orthogonal to each other and of unit length.
m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));
// m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
// to normalize the below cross product of the two.
m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);
// Fill in the view matrix entries.
float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));
XMStoreFloat3(&cameraPosition, m_cameraPosition);
XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
XMStoreFloat3(&cameraZAxis, m_cameraZAxis);
viewMatrix(0, 0) = cameraXAxis.x;
viewMatrix(1, 0) = cameraXAxis.y;
viewMatrix(2, 0) = cameraXAxis.z;
viewMatrix(3, 0) = x;
viewMatrix(0, 1) = cameraYAxis.x;
viewMatrix(1, 1) = cameraYAxis.y;
viewMatrix(2, 1) = cameraYAxis.z;
viewMatrix(3, 1) = y;
viewMatrix(0, 2) = cameraZAxis.x;
viewMatrix(1, 2) = cameraZAxis.y;
viewMatrix(2, 2) = cameraZAxis.z;
viewMatrix(3, 2) = z;
viewMatrix(0, 3) = 0.0f;
viewMatrix(1, 3) = 0.0f;
viewMatrix(2, 3) = 0.0f;
viewMatrix(3, 3) = 1.0f;
gView = XMLoadFloat4x4(&viewMatrix);
gView2 = XMMatrixLookAtRH(m_cameraPosition, XMVectorSet(2.0f, 2.0f, 2.0f, 1.0f),
XMVectorSet(1.0f, -1.0f, -1.0f, 0.0f));
//m_modelViewProjectionConstantBufferData->view = viewMatrix;
printf("yo");
}
void SetProjectionMatrix(float width, float height, float nearZ, float farZ)
{
XMMATRIX orthographicProjectionMatrix = XMMatrixOrthographicRH(width, height, nearZ, farZ);
XMFLOAT4X4 orientation = XMFLOAT4X4
(
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
);
XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
gProj = XMMatrixTranspose( XMMatrixMultiply(orthographicProjectionMatrix, orientationMatrix));
}
void SetModelMatrix()
{
XMFLOAT4X4 orientation = XMFLOAT4X4
(
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
);
XMMATRIX orientationMatrix = XMMatrixTranspose( XMLoadFloat4x4(&orientation));
gModel = orientationMatrix;
}
bool SelectObject(float screenX, float screenY, float screenWidth, float screenHeight)
{
XMMATRIX projectionMatrix = gProj;
XMMATRIX viewMatrix = gView;
XMMATRIX modelMatrix = gModel;
XMMATRIX viewMatrix2 = gView2;
XMVECTOR v = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
0.0f,
0.0f,
screenWidth,
screenHeight,
0.0f,
1.0f,
projectionMatrix,
viewMatrix,
modelMatrix);
XMVECTOR rayOrigin = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
0.0f,
0.0f,
screenWidth,
screenHeight,
0.0f,
1.0f,
projectionMatrix,
viewMatrix,
modelMatrix);
// Code to retrieve v0, v1 and v2 is omitted
auto diff = v - rayOrigin;
auto diffNorm = XMVector3Normalize(diff);
XMVECTOR v2 = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
0.0f,
0.0f,
screenWidth,
screenHeight,
0.0f,
1.0f,
projectionMatrix,
viewMatrix2,
modelMatrix);
XMVECTOR rayOrigin2 = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
0.0f,
0.0f,
screenWidth,
screenHeight,
0.0f,
1.0f,
projectionMatrix,
viewMatrix2,
modelMatrix);
auto diff2 = v2 - rayOrigin2;
auto diffNorm2 = XMVector3Normalize(diff2);
printf("hi");
return true;
}
int main()
{
SetViewMatrix();
SetProjectionMatrix(1000, 1000, 0.0f, 1.0f);
SetModelMatrix();
SelectObject(500, 500, 1000, 1000);
return 0;
}
Please check your application with this code and confirm. You'll see the code is the same as your earlier code. The only additions are the initial values of the camera parameters, the calculation of a second view matrix in SetViewMatrix() using the XMMatrixLookAtRH method, and the calculation of vectors using both view matrices in SelectObject().
No need to Transpose
I did not have to transpose any matrix. Transpose should not be required for the Projection and Model matrices because they are both diagonal matrices and transposing them gives the same matrix. I don't think a transpose of the View matrix is required either. The formula of XMMatrixLookAtRH explained here provides the view matrix exactly like yours. Also, the sample project given here does not transpose its matrices while checking intersection. You can download and check the sample project.
Possible problem sources
1) Initialization: The only code I have not been able to see is your initialization of m_cameraZAxis, m_cameraXAxis, nearZ, farZ parameters, etc. Also, I have not used your camera rotation functions. As you can see, I have initialized camera by using position, target and direction vectors for calculation. Do check if your initial calculation of m_cameraZAxis accords with my sample code.
2) LH/RH look: Make sure there is no accidental mix-up of left-hand and right-hand looks anywhere in your code.
3) Check if your rotation code (ChangeCameraPitch or ChangeCameraYaw) is accidentally creating camera axes which are not orthogonal. You are using the camera's Y-axis as input in ChangeCameraYaw and as output in ChangeCameraPitch. But the Y-axis is reset in SetViewMatrix by the cross product of the X and Z axes, so the earlier value of the Y-axis may get lost.
Good luck with your application! Do tell if you find a proper solution and root cause to your problem.
As mentioned, the issue was not fully resolved even though clicking now works. The issue with the distortion of the model when moving the camera, which I suspected was related, was still present. What I meant by "the model gets distorted" is visible in the following illustration:
The left image shows how the model looks when the camera is located in the center of the world, i.e. (0, 0, 0) while the right image shows what happens when I move the camera in negative y-axis direction. As can be seen the model widens at the bottom and gets smaller at the top which is the same behavior described in the link I already provided above.
What I eventually did to resolve both issues is:
Transpose the matrices before passing them to XMVector3Unproject (already described above)
Transposed my view matrix by changing the code of the SetViewMatrix method (see the code below)
The SetViewMatrix method now looks as follows:
void Camera::SetViewMatrix()
{
XMFLOAT3 cameraPosition;
XMFLOAT3 cameraXAxis;
XMFLOAT3 cameraYAxis;
XMFLOAT3 cameraZAxis;
XMFLOAT4X4 viewMatrix;
// Keep camera's axes orthogonal to each other and of unit length.
m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));
// m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
// to normalize the below cross product of the two.
m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);
// Fill in the view matrix entries.
float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));
//XMStoreFloat3(&cameraPosition, m_cameraPosition);
XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
XMStoreFloat3(&cameraZAxis, m_cameraZAxis);
viewMatrix(0, 0) = cameraXAxis.x;
viewMatrix(0, 1) = cameraXAxis.y;
viewMatrix(0, 2) = cameraXAxis.z;
viewMatrix(0, 3) = x;
viewMatrix(1, 0) = cameraYAxis.x;
viewMatrix(1, 1) = cameraYAxis.y;
viewMatrix(1, 2) = cameraYAxis.z;
viewMatrix(1, 3) = y;
viewMatrix(2, 0) = cameraZAxis.x;
viewMatrix(2, 1) = cameraZAxis.y;
viewMatrix(2, 2) = cameraZAxis.z;
viewMatrix(2, 3) = z;
viewMatrix(3, 0) = 0.0f;
viewMatrix(3, 1) = 0.0f;
viewMatrix(3, 2) = 0.0f;
viewMatrix(3, 3) = 1.0f;
m_modelViewProjectionConstantBufferData->view = viewMatrix;
}
So I just exchanged the row and column indices. Note that I had to make sure that my ChangeCameraYaw method gets called before my ChangeCameraPitch method, because otherwise the orientation of the model is not as I want it.
There is also another approach that could be used. Instead of exchanging the row and column indices and transposing the view matrix before passing it to XMVector3Unproject, I could use the row_major keyword on the view matrix in the vertex shader:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
matrix model;
row_major matrix view;
matrix projection;
};
I came across this idea in this blog post. The row_major keyword influences how the shader compiler interprets the matrix in memory. The same could also be achieved by changing the order of the vector * matrix multiplication in the vertex shader, i.e. using pos = mul(view, pos); instead of pos = mul(pos, view);
That's pretty much it. The two issues are indeed interconnected but using what I posted in this question I was able to resolve both issues so I'm accepting my own reply as answer to this question. Hope it helps someone in the future.

Deferred Rendering strange behaviour

I'm having a little trouble implementing a deferred rendering engine using OpenGL.
I can render to texture and all the data is correct for the first pass (calculating albedo, normals and depth), but when it comes to calculating the textures for the lighting (emissive and specular) I'm having some trouble.
This is what I've got so far:
The problem, as you might have guessed, is that sort of line showing a division between my red and blue lights.
Using NSight to see the history of a pixel right next to the line that should have been red (or blended with a blue light pixel), I can see this:
So the pixel IS actually getting colored by the red light, but then it goes back to blue, and I really can't understand why.
This is the code I use to do deferred drawing:
//
// Preparing albedo, normals and depth
//
GetGBuffer("PrepassBuffer")->BindAsRenderTarget();
Start(stage);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
DeferredPreparePass(stage, shadersManager->GetProgramFromName("deferred-prepare"));
Finish(stage);
//
// Preparing light emissive and specular textures
//
GetGBuffer("LightsBuffer")->BindAsRenderTarget();
Start(stage);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
DeferredPointPass(stage, shadersManager->GetProgramFromName("deferred-point"));
glCullFace(GL_BACK);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Finish(stage);
//
// Final composition of the image
//
Start(stage);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
DeferredCombinePass(stage, shadersManager->GetProgramFromName("deferred-combine"));
Finish(stage);
This is the code I use to do the point light pass (the DeferredPointPass call above); _pointLightMesh is an icosahedron sphere mesh:
for (ASizeT i = 0; i < nLights; i++)
{
AnimaLight* light = lightsManager->GetLight((AUint)i);
if (!light->IsPointLight())
continue;
AnimaVertex3f lPos = light->GetPosition();
AFloat range = light->GetRange();
AnimaMatrix m1, m2, m3;
m1.Translate(lPos);
m2.Scale(range, range, range, 1.0f);
m3 = m1 * m2;
float dist = (lPos - activeCamera->GetPosition()).Length();
if (dist < light->GetRange())
glCullFace(GL_FRONT);
else
glCullFace(GL_BACK);
program->UpdateLightProperies(light);
program->UpdateMeshProperies(_pointLightMesh, m3);
program->UpdateRenderingManagerProperies(this);
program->EnableInputs(_pointLightMesh);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _pointLightMesh->GetIndexesBufferObject());
glDrawElements(GL_TRIANGLES, _pointLightMesh->GetFacesIndicesCount(), GL_UNSIGNED_INT, 0);
program->DisableInputs();
}
And this is the shader I use to calculate my emissive and specular maps for the point light pass:
#version 150 core
in mat4 frag_inverseProjectionViewMatrix;
out vec4 FragColor[2];
uniform sampler2D REN_GB_PrepassBuffer_DepthMap;
uniform sampler2D REN_GB_PrepassBuffer_NormalMap;
uniform vec2 REN_InverseScreenSize;
uniform vec3 CAM_Position;
uniform float PTL_Range;
uniform vec3 PTL_Position;
uniform vec3 PTL_Color;
uniform float PTL_ConstantAttenuation;
uniform float PTL_LinearAttenuation;
uniform float PTL_ExponentAttenuation;
void main()
{
vec3 pos = vec3((gl_FragCoord.x * REN_InverseScreenSize.x), (gl_FragCoord.y * REN_InverseScreenSize.y), 0.0f);
pos.z = texture(REN_GB_PrepassBuffer_DepthMap, pos.xy).r;
vec3 normal = normalize(texture(REN_GB_PrepassBuffer_NormalMap, pos.xy).xyz * 2.0f - 1.0f);
vec4 clip = frag_inverseProjectionViewMatrix * vec4(pos * 2.0f - 1.0f, 1.0f);
pos = clip.xyz / clip.w;
float dist = length(PTL_Position - pos);
if(dist > PTL_Range)
{
discard;
}
float atten = (PTL_ConstantAttenuation + PTL_LinearAttenuation * dist + PTL_ExponentAttenuation * dist * dist + 0.00001);
vec3 incident = normalize(PTL_Position - pos);
vec3 viewDir = normalize(CAM_Position - pos);
vec3 halfDir = normalize(incident + viewDir);
float lambert = clamp(dot(incident, normal), 0.0f, 1.0f);
float rFactor = clamp(dot(halfDir, normal), 0.0f, 1.0f);
float sFactor = pow(rFactor, 33.0f);
FragColor[0] = vec4(PTL_Color * lambert / atten, 1.0f);
FragColor[1] = vec4(PTL_Color * sFactor / atten * 0.33f, 1.0f);
}
I hope someone has and idea of what's going on here, thanks in advance!
OK, I figured out what I was missing. I needed to disable depth writes (glDepthMask(GL_FALSE)) before doing the point light pass and then re-enable them before combining the final image. Otherwise the first light volume writes into the freshly cleared depth buffer and makes the overlapping part of the next light volume fail the depth test, which is what produced that dividing line.
My code for the deferred rendering function now looks like this (all the rest is still the same):
// Same as before
// ...
//
//
// Preparing light emissive and specular textures
//
GetGBuffer("LightsBuffer")->BindAsRenderTarget();
Start(stage);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(GL_FALSE); // <---- this is what I was missing
DeferredPointPass(stage, shadersManager->GetProgramFromName("deferred-point"));
glCullFace(GL_BACK);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_TRUE); // <---- this is what I was missing
Finish(stage);
// Same as before
// ...
//

how to convert XMVECTOR to D3DVECTOR and to D3DCOLORVALUE in DirectX 9?

I am learning DirectX (DirectX 9) from www.directxtutorial.com and using Visual Studio 2012 on Windows 8. d3dx9 (d3dx) has been replaced by other headers like DirectXMath, so I replaced everything that was needed, but there is one problem left: converting XMVECTOR to D3DVECTOR and to D3DCOLORVALUE.
The problem code (the problem lines are marked with /*problem!*/):
void init_light(void) {
D3DLIGHT9 light; // create the light struct
D3DMATERIAL9 material; // create the material struct
ZeroMemory(&light, sizeof(light)); // clear out the light struct for use
light.Type = D3DLIGHT_DIRECTIONAL; // make the light type 'directional light'
light.Diffuse = D3DXCOLOR(0.5f, 0.5f, 0.5f, 1.0f); /*problem!*/ // set the light's color
light.Direction = D3DXVECTOR3(-1.0f, -0.3f, -1.0f); /*problem!*/
d3ddev->SetLight(0, &light); // send the light struct properties to light #0
d3ddev->LightEnable(0, TRUE); // turn on light #0
ZeroMemory(&material, sizeof(D3DMATERIAL9)); // clear out the struct for use
material.Diffuse = D3DXCOLOR(1.0f, 1.0f, 1.0f, 1.0f); /*problem!*/ // set diffuse color to white
material.Ambient = D3DXCOLOR(1.0f, 1.0f, 1.0f, 1.0f); /*problem!*/ // set ambient color to white
d3ddev->SetMaterial(&material); /* set the globally-used material to &material */
}
You should replace all D3DVECTOR with XMVECTOR and D3DXCOLOR with XMCOLOR.
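Alternatively, a minimal sketch that avoids the conversion problem entirely (not using D3DX or the DirectXMath types at all): D3DCOLORVALUE and D3DVECTOR are plain float structs, so the light and material members can be filled directly. d3ddev is the device pointer from the question's code.
#include <d3d9.h>
void init_light(void)
{
    D3DLIGHT9 light;
    D3DMATERIAL9 material;
    ZeroMemory(&light, sizeof(light));
    light.Type = D3DLIGHT_DIRECTIONAL;
    // D3DCOLORVALUE is just { float r, g, b, a; }
    light.Diffuse.r = 0.5f; light.Diffuse.g = 0.5f; light.Diffuse.b = 0.5f; light.Diffuse.a = 1.0f;
    // D3DVECTOR is just { float x, y, z; }
    light.Direction.x = -1.0f; light.Direction.y = -0.3f; light.Direction.z = -1.0f;
    d3ddev->SetLight(0, &light);
    d3ddev->LightEnable(0, TRUE);
    ZeroMemory(&material, sizeof(D3DMATERIAL9));
    material.Diffuse.r = material.Diffuse.g = material.Diffuse.b = material.Diffuse.a = 1.0f;
    material.Ambient.r = material.Ambient.g = material.Ambient.b = material.Ambient.a = 1.0f;
    d3ddev->SetMaterial(&material);
}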

OpenGL convert/translate/normalize coordinates?

I am looking for some examples of (non-deprecated) OpenGL code that actually show the coordinate conversion, step by step, from start to finish, in a 2D setting that uses non-standard coordinates, i.e.
GLfloat Square[] =
{
-5.5f, -5.0f, 0.0f, 1.0f,
-5.5f, 5.0f, 0.0f, 1.0f,
5.5f, 5.0f, 0.0f, 1.0f,
5.5f, -5.0f, 0.0f, 1.0f
};
And then, using coordinates such as the above, converting them into the necessary coordinates so they map to -1..1 etc. and output to the screen.
Every single example or tutorial I have found on the net only does so using coordinates that are already in the range -1 to 1.
Environment: C++.
Well if you use glm (http://glm.g-truc.net/index.html) you can just use glm::ortho to create a view matrix in the same way you would use glOrtho in old-style OpenGL.
EG:
glm::mat4 viewMatrix = glm::ortho(-5.0f, 5.0f, -5.0f, 5.0f, 0.0f, 1.0f);
If you then plug that into your shader you should get a mapping from -5 to +5 instead of -1 to +1, or whatever scale it is that you want.
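For completeness, a minimal sketch of feeding that matrix to the shader (this assumes an already created GL context and a linked program object shaderProgram with a mat4 uniform named uProjection; both names are placeholders, not part of the question's code):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
// The same orthographic matrix as above, covering the custom coordinate range.
glm::mat4 projection = glm::ortho(-5.0f, 5.0f, -5.0f, 5.0f, 0.0f, 1.0f);
// Upload it; the vertex shader can then do: gl_Position = uProjection * vertexPosition;
glUseProgram(shaderProgram);
GLint location = glGetUniformLocation(shaderProgram, "uProjection");
glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(projection));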
When you use 'no' transformations in your project, you simply need to use coordinates that are already in normalized device coordinates, which form a box ranging from -1 to 1 on each axis.
In that case the vertex shader will contain a line similar to this:
gl_Position = attribVertexPos; // no transformation
For 2D, if you want to provide coordinates in a different range, all you need to do is scale the position:
for the range -10 to 10, use a vertex shader with the code gl_Position = vec4(attribVertexPos.xyz * 0.1, 1.0) (scale only x, y and z; scaling w as well would be cancelled out by the perspective divide)
or you can build a scaling matrix and use that instead.