First, the part that does work, using DirectXMath where data.model is an XMMATRIX:
static auto model_matrix = DirectX::XMMatrixIdentity();
static auto pos = DirectX::XMVectorSet(0.0f, 0.0f, -10.0f, 0.0f);
static auto focus = DirectX::XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);
static auto up = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
static auto view_matrix = DirectX::XMMatrixLookAtLH(pos, focus, up);
static auto proj_matrix = DirectX::XMMatrixPerspectiveFovLH(glm::radians(45.0f), 16.0f / 9.0f, 0.1f, 10000.0f);
Building the MVP:
data.model = model_matrix * view_matrix * proj_matrix;
data.model = DirectX::XMMatrixTranspose(data.model);
When I hand data.model over to my HLSL shader, everything works fine and I can change the pos vector to look at my cube from different angles. HLSL vertex shader:
cbuffer myCbuffer : register(b0) {
float4x4 mat;
}
float4 main(float3 pos : POSITION) : SV_POSITION
{
return mul(float4(pos, 1), mat);
}
Now I try to make something similar using GLM (I changed the type of data.model to glm::mat4):
auto gl_m = glm::mat4(1.0f);
static auto gl_pos = glm::vec3(0.0f, 0.0f, -10.0f);
static auto gl_focus = glm::vec3(0.0f, 0.0f, 0.0f);
static auto gl_up = glm::vec3(0.0f, 1.0f, 0.0f);
auto gl_v = glm::lookAtLH(gl_pos, gl_focus, gl_up);
auto gl_p = glm::perspectiveFovLH_ZO(glm::radians(45.0f), 1280.0f, 720.0f, 0.1f, 10000.0f);
Building the MVP:
data.model = gl_m * gl_v * gl_p;
Now when I pass this as data.model, the cube does get rendered, but the whole screen is filled black (my cube is black and the clear color is light blue, so I think it's rendering the cube but the camera is really close to or inside it).
I don't know where to look to fix this. The projection matrix should produce the correct clip space, since I'm using perspectiveFovLH_ZO and the ZO suffix fixes the clip-space depth range to [0..1]. It could be how the HLSL float4x4 deals with the glm::mat4, but both are column-major I believe, so there should be no need to transpose.
It might have something to do with the rasterizer culling settings and the FrontCounterClockwise setting, but I'm fairly new to DirectX and don't know exactly what it does.
D3D11_RASTERIZER_DESC raster_desc = {0};
raster_desc.FillMode = D3D11_FILL_MODE::D3D11_FILL_SOLID;
raster_desc.CullMode = D3D11_CULL_MODE::D3D11_CULL_NONE; // no culling, so winding order can't hide the cube
raster_desc.FrontCounterClockwise = false; // selects which winding counts as front-facing; only matters when culling
d3device->CreateRasterizerState(&raster_desc, rasterize_state.GetAddressOf());
Any help is appreciated, let me know if I forgot anything.
Managed to fix it (seems like a bandaid fix but I'll work with it for now).
After I build the MVP I added:
gl_mvp[3][3] = -10.0f;
gl_mvp = glm::transpose(gl_mvp);
The -10 is the z coordinate of the camera position (the same one you pass as the first argument to glm::lookAtLH). I thought HLSL/DirectX matched the column-major GLM matrices, but apparently not: it needed the extra transpose call. I'm not sure why that is, or what the gl_mvp[3][3] element of the MVP represents such that it has to match the camera's z position; maybe someone with a better understanding of the math behind it can clarify.
Removed gl_mvp[3][3] = -10.0f; since it's a bad fix. Instead I now have this:
Changed both lookAtLH and perspectiveFovLH_ZO to their RH variants.
Also changed the order of building the MVP from M * V * P to P * V * M.
New code that seems to work well (even flying around using my Camera class):
auto gl_m = glm::mat4(1.0f);
auto gl_v = glm::lookAtRH(position, position + dir, {0, 1, 0});
auto gl_p = glm::perspectiveFovRH_ZO(glm::radians(FOV), 1280.0f, 720.0f, 0.1f, 10000.0f);
glm::mat4 gl_mvp = gl_p * gl_v * gl_m;
return glm::transpose(gl_mvp);
It's a bit different from the previous code because this is inside my Camera class, so position, dir and FOV are variables I keep track of, but you get the idea. I passed this return value to my HLSL shader and all seems well so far.
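For what it's worth, the extra transpose is consistent with the conventions involved: DirectXMath builds matrices for row-vector math, which is what mul(float4(pos, 1), mat) in the shader above expects, while GLM builds them for column-vector math, hence the P * V * M order. As a sketch (my own untested assumption, not the code above), the LH variants should work the same way once the order and the transpose are fixed:

auto gl_v = glm::lookAtLH(gl_pos, gl_focus, gl_up);
auto gl_p = glm::perspectiveFovLH_ZO(glm::radians(45.0f), 1280.0f, 720.0f, 0.1f, 10000.0f);
glm::mat4 gl_mvp = gl_p * gl_v * gl_m; // column-vector order: apply M, then V, then P
data.model = glm::transpose(gl_mvp);   // match the shader's mul(row_vector, matrix)

Alternatively, keeping the matrix untransposed and writing mul(mat, float4(pos, 1)) in the shader should be equivalent, since HLSL cbuffers default to column-major storage like GLM.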
Related
I have what I believed to be a basic need: from the 2D position of the mouse on the screen, I need to get the closest 3D point in the 3D world. This looks like a common ray-casting problem (even if ray casting itself is not my goal).
I googled and read a lot: the topic is messy and things quickly get intricate. My initial problem involves lots of 3D points that I do not know (meshes or point clouds from the internet), so it's impossible to know what result to expect! Thus, I decided to create simple shapes (triangle, quadrangle, cube) with points that I know (each coordinate of each point is 0.f or 0.5f in the local frame) and to check whether I can "recover" the 3D point positions from the mouse cursor as I move it over the screen.
Note: all coordinates of all points of all shapes are known values like 0.f or 0.5f. For example, with the triangle:
float vertices[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
What I do
I have a 3D OpenGL renderer to which I added a GUI with controls on the rendered scene.
Transformations: tx, ty, tz, rx, ry, rz are controls that change the model matrix. In code:
// create transformations: model represents local to world transformation
model = glm::mat4(1.0f); // initialize matrix to identity matrix first
model = glm::translate(model, glm::vec3(tx, ty, tz));
model = glm::rotate(model, glm::radians(rx), glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, glm::radians(ry), glm::vec3(0.0f, 1.0f, 0.0f));
model = glm::rotate(model, glm::radians(rz), glm::vec3(0.0f, 0.0f, 1.0f));
ourShader.setMat4("model", model);
model changes only the position of the shape in the world and has no connection with the position of the camera (that's what I understand from tutorials).
Camera: from here, I ended up with a camera class that holds the view and proj matrices. In code:
// get view and projection from camera
view = cam.getViewMatrix();
ourShader.setMat4("view", view);
proj = cam.getProjMatrix((float)SCR_WIDTH, (float)SCR_HEIGHT, near, 100.f);
ourShader.setMat4("proj", proj);
The camera is a fly-like camera that can be moved when moving the mouse or using keyboard arrows and that does not act on model, but only on view and proj (that's what I understand from tutorials).
The shader then uses model, view and proj this way:
uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
void main()
{
// note that we read the multiplication from right to left
gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
Screen to world: as glm::unProject didn't always return the results I expected, I added a control to not use it (back-projecting by hand). In code, first I get the mouse cursor position frame3DPos following this:
// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouseCursorCallback(GLFWwindow* window, double xposIn, double yposIn)
{
// screen to world transformation
xposScreen = xposIn;
yposScreen = yposIn;
int windowWidth = 0, windowHeight = 0; // size in screen coordinates.
glfwGetWindowSize(window, &windowWidth, &windowHeight);
int frameWidth = 0, frameHeight = 0; // size in pixel.
glfwGetFramebufferSize(window, &frameWidth, &frameHeight);
glm::vec2 frameWinRatio = glm::vec2(frameWidth, frameHeight) /
glm::vec2(windowWidth, windowHeight);
glm::vec2 screen2DPos = glm::vec2(xposScreen, yposScreen);
glm::vec2 frame2DPos = screen2DPos * frameWinRatio; // window / frame sizes may be different.
frame2DPos = frame2DPos + glm::vec2(0.5f, 0.5f); // shift to GL's center convention.
glm::vec3 frame3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
frame3DPos.x = frame2DPos.x;
frame3DPos.y = frameHeight - 1.0f - frame2DPos.y; // GL's window origin is at the bottom left
frame3DPos.z = 0.f;
glReadPixels((GLint) frame3DPos.x, (GLint) frame3DPos.y, // CAUTION: cast to GLint.
1, 1, GL_DEPTH_COMPONENT,
GL_FLOAT, &zbufScreen); // CAUTION: GL_DOUBLE is NOT supported.
frame3DPos.z = zbufScreen; // z-buffer.
And then I can call glm::unProject or not (back-projecting by hand) according to the controls in the GUI:
glm::vec3 world3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
if (screen2WorldUsingGLM) {
glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
world3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
} else {
glm::mat4 trans = proj * view * model;
glm::vec4 frame4DPos(frame3DPos, 1.f);
frame4DPos = glm::inverse(trans) * frame4DPos;
world3DPos.x = frame4DPos.x / frame4DPos.w;
world3DPos.y = frame4DPos.y / frame4DPos.w;
world3DPos.z = frame4DPos.z / frame4DPos.w;
}
Question: the glm::unProject doc says "Map the specified window coordinates (win.x, win.y, win.z) into object coordinates", but I am not sure I understand what object coordinates are. Does "object coordinates" refer to the local, world, view, or clip space described here?
Z-buffering is always enabled, whether the shape is 2D (triangle, quadrangle) or 3D (cube). In code:
glEnable(GL_DEPTH_TEST); // Enable z-buffer.
while (!glfwWindowShouldClose(window)) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // also clear the z-buffer
In pictures, this is what I get:
The camera is positioned at (0., 0., 0.) and looks "ahead" (front = -z, as the z-axis points from the screen toward me). The shape is positioned (using tx, ty, tz, rx, ry, rz) "in front of the camera" with tz = -5 (5 units along the camera's front vector).
What I get
Triangle in initial setting
I get correct xpos and ypos in the world frame but an incorrect zpos = 0 (z-buffering is enabled). I expected zpos = -5 (as tz = -5).
Question: why is zpos incorrect?
If I do not use glm::unProject, I get outer-space results.
Question: why doesn't "back-projecting" by hand return results consistent with glm::unProject? Is this logical? Are they different operations? (I believed they should be equivalent, but they are obviously not.)
Triangle moved with translation
After a translation of about tx = 0.5 I still get the same coordinates (local frame), where I expected the previous coordinates translated along the x-axis. Not using glm::unProject returns outer-space results here too...
Question: why is the translation (applied by model, not view nor proj) ignored?
Cube in initial setting
I get correct xpos, ypos and zpos?!... So why does this not work the same way with the "2D" triangle (which is a "3D" one to me, so they should behave the same)?
Cube moved with translation
Translating along ty this time seems to have no effect (I still get the same coordinates, in the local frame).
Question: as with the triangle, why is the translation ignored?
What I'd like to get
The main question is: why is the model transformation ignored? If this is to be expected, I'd like to understand why.
If there's a way to recover the "true" position of the shape in the world (including the model transformation) from the position of the mouse cursor, I'd like to understand how.
As I am new to OpenGL, I didn't get that "object coordinates" in the glm::unProject doc is another way to refer to local space. Solution: pass view*model to glm::unProject and apply model to the result, or pass only view to glm::unProject, as explained here: Screen Coordinates to World Coordinates.
This fixes all weird behaviors I observed.
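For reference, a minimal sketch of both options (reusing the view, model, proj, frame3DPos, frameWidth and frameHeight variables from the code above):

glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
// Option 1: view*model makes unProject return local/object space; apply model to reach world space.
glm::vec3 local3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
glm::vec3 world3DPos = glm::vec3(model * glm::vec4(local3DPos, 1.0f));
// Option 2: view alone makes unProject return world space directly.
glm::vec3 world3DPosDirect = glm::unProject(frame3DPos, view, proj, viewport);

This also suggests why the by-hand back-projection gave outer-space results: glm::unProject first maps the window coordinates into NDC in [-1, 1] (including the depth value, under OpenGL's default clip conventions) before applying the inverse matrix, while the by-hand code feeds raw pixel coordinates into the inverse. A sketch of an equivalent by-hand version:

glm::vec4 ndc;
ndc.x = 2.0f * frame3DPos.x / frameWidth - 1.0f;
ndc.y = 2.0f * frame3DPos.y / frameHeight - 1.0f;
ndc.z = 2.0f * frame3DPos.z - 1.0f; // depth buffer [0, 1] -> NDC [-1, 1]
ndc.w = 1.0f;
glm::vec4 obj = glm::inverse(proj * view * model) * ndc;
obj /= obj.w; // perspective divide
glm::vec3 worldByHand = glm::vec3(model * obj); // object space -> world space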
Right now I am working with shadow maps in my game engine. In the code below I compute the view-projection matrix for a directional light source. I have a fixed projection-box size (=50), so for now it lights up a box (-50; 50) in all directions, positioned at the world center. It works correctly, but I want it to follow the camera such that the camera's position is always the center of this box. How do I do this?
Matrix4x4 DirectionalLight::GetMatrix() const
{
Vector3 position = Camera::GetPosition();
float sizeLx = -this->ProjectionSize;
float sizeRx = +this->ProjectionSize;
float sizeLy = -this->ProjectionSize;
float sizeRy = +this->ProjectionSize;
float sizeLz = -this->ProjectionSize;
float sizeRz = +this->ProjectionSize;
Matrix4x4 OrthoProjection = MakeOrthographicMatrix(sizeLx, sizeRx, sizeLy, sizeRy, sizeLz, sizeRz);
Matrix4x4 LightView = MakeViewMatrix(
this->Direction,
MakeVector3(0.0f, 0.0f, 0.0f),
MakeVector3(0.0f, 1.0f, 0.0f)
);
return OrthoProjection * LightView;
}
I am using glm as my math library; most functions are aliases/wrappers: MakeOrthographicMatrix is glm::ortho, MakeViewMatrix is glm::lookAt.
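A sketch of one way to do this, assuming MakeViewMatrix forwards its arguments to glm::lookAt as (eye, center, up): look at the camera position instead of the world origin, so the orthographic box is centered on the camera:

Matrix4x4 DirectionalLight::GetMatrix() const
{
    Vector3 position = Camera::GetPosition();
    Matrix4x4 OrthoProjection = MakeOrthographicMatrix(
        -this->ProjectionSize, +this->ProjectionSize,
        -this->ProjectionSize, +this->ProjectionSize,
        -this->ProjectionSize, +this->ProjectionSize);
    // The eye sits one unit along the light direction from the camera position,
    // looking back at it, so the box (-size; +size) follows the camera.
    Matrix4x4 LightView = MakeViewMatrix(
        position + this->Direction,
        position,
        MakeVector3(0.0f, 1.0f, 0.0f)
    );
    return OrthoProjection * LightView;
}

Note the original code passes this->Direction as the eye and the origin as the center; translating both by the camera position preserves the light direction.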
I've got two planets, the sun and the earth. I want them to spin on their own axis and orbit their host at the same time.
I can get these two behaviors to work individually, but I'm stumped as to how to combine them.
void planet::render() {
mat4 axisPos = mat4(1.0f);
axialRotation = rotate(axisPos, axisRotationSpeedConstant, vec3(0.0f, 1.0f, 0.0f));
if (hostPlanet != NULL) {
mat4 hostTransformPosition = mat4(1.0f);
hostTransformPosition[3] = hostPlanet->getTransform()[3];
orbitalSpeed += orbitalSpeedConstant;
orbitRotation = rotate(hostTransformPosition, orbitalSpeed, vec3(0.0f, 1.0f, 0.0f));
orbitRotation = translate(orbitRotation, vec3(distanceFromParent, 0.0f, 0.0f));
//rotTransform will make them spin on their axis, but not orbit their parent planet
mat4 rotTransform = transform * axialRotation;
//transform *= rotTransform;
//orbitRotation will make the planet orbit, but it won't spin on its own axis.
transform = orbitRotation;
}
else {
transform *= axialRotation;
}
glUniform4fv(gColLoc, 1, &color[0]);
glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &getTransform()[0][0]);
glDrawArrays(GL_LINES, 0, NUMVERTS);
};
Woohoo! As usual, asking the question led me to being able to answer it. After the last line, knowing that transform[0] to transform[2] hold the rotation part of the 4x4 matrix (with transform[3] holding the position in 3D space), I thought to replace the rotation from the previous matrix calculation with the current one.
Badabing, I got my answer.
transform = orbitRotation;
transform[0] = rotTransform[0];
transform[1] = rotTransform[1];
transform[2] = rotTransform[2];
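In other words, the final transform takes its translation column from the orbit matrix and its rotation columns from the spin matrix. A generic sketch of that idea (a hypothetical helper, not part of the original code):

// Columns 0-2 of a 4x4 transform hold the rotation/scale basis; column 3 holds the translation.
mat4 combineTransforms(const mat4& positionFrom, const mat4& orientationFrom) {
    mat4 result = orientationFrom; // keep orientation (and scale) from the second matrix
    result[3] = positionFrom[3];   // keep position from the first matrix
    return result;
}
// usage: transform = combineTransforms(orbitRotation, rotTransform);

Note this discards any scale stored in positionFrom's basis columns, which is fine here since orbitRotation is a pure rotation plus translation.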
I'm trying to change my camera projection from perspective to orthographic.
At the moment my code is working fine with the perspective projection
m_prespective = glm::perspective(70.0f, (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_prespective * glm::lookAt(m_position, m_forward, m_up);
But as soon as I change it to an orthographic projection I can't see my mesh anymore.
m_ortho = glm::ortho(0.0f, (float)DISPLAY_WIDTH, (float)DISPLAY_HEIGHT,5.0f, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_ortho * glm::lookAt(m_position, m_forward, m_up);
I don't understand what I'm doing wrong.
In the perspective projection, the term (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT is the picture aspect ratio. This number is going to be close to 1. The left and right clip plane distances at the near plane for a perspective projection are on the order of aspect * near_distance. More interesting, though, is the left-right expanse at the viewing distance, which in your case is abs(m_position.z) = abs(mesh.radius).
Carrying this over to the orthographic projection, the left, right, top and bottom clip plane distances should be of the same order of magnitude, so given that aspect is close to 1, the values for left, right, bottom and top should be close to abs(mesh.radius). The resolution of the display in pixels is totally irrelevant, except for the aspect ratio.
Furthermore, when using a perspective projection the value for near should be chosen as large as possible so that all desired geometry is still visible. Doing otherwise wastes precious depth buffer resolution.
float const view_distance = mesh.radius + 1;
float const aspect = (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT;
switch( typeof_projection ){
case perspective:
m_projection = glm::perspective(70.0f, aspect, 1.f, 1000.0f);
break;
case ortho:
m_projection = glm::ortho(
-aspect * view_distance,
aspect * view_distance,
-view_distance,
view_distance,
-1000.0f, 1000.0f );
break;
}
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -view_distance);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_projection * glm::lookAt(m_position, m_forward, m_up);
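If you want the orthographic view to frame the mesh like the perspective one does at the viewing distance, the half-extent follows from the field of view. A sketch, assuming the 70.0f passed to glm::perspective above is meant as degrees:

// Vertical half-extent of the perspective frustum at the viewing distance:
float const half_h = view_distance * std::tan(glm::radians(70.0f) * 0.5f);
m_projection = glm::ortho(
    -aspect * half_h, aspect * half_h, // left, right
    -half_h, half_h,                   // bottom, top
    -1000.0f, 1000.0f );               // near, far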
In order to calculate the projection-view matrix for a directional light, I take the vertices of the frustum of my active camera, multiply them by the rotation of my directional light, and use these rotated vertices to calculate the extents of an orthographic projection matrix for the light.
Then I create the view matrix using the center of my light's frustum bounding box as the eye position, the light's direction as the forward vector, and the Y axis as the up vector.
I calculate the camera frustum vertices by transforming the 8 corners of the NDC cube (size 2, centered at the origin) by the inverse of the camera's projection-view matrix.
Everything works fine and the direction light projection view matrix is correct but I've encountered a big issue with this method.
Let's say that my camera is facing forward (0, 0, -1), positioned on the origin and with a zNear value of 1 and zFar of 100. Only objects visible from my camera frustum are rendered into the shadow map, so every object that has a Z position between -1 and -100.
The problem is: if my light has a direction that makes the light come from behind the camera, and there is an object, for example, at a Z position of 10 (so behind the camera but still in front of the light) that is tall enough to cast a shadow on the scene visible from my camera, this object is not rendered into the shadow map because it's not included in my light frustum, resulting in a missing shadow.
In order to solve this problem I was thinking of using the scene bounding box to calculate the light's projection-view matrix, but doing this would be useless because the image rendered into the shadow map could become so large that numerous artifacts would be visible (shadow acne, etc.), so I skipped this solution.
How could I overcome this problem?
I've read this post (under the section 'Calculating a tight projection') to create my projection-view matrix and, for clarity, this is my code:
Frustum* cameraFrustum = activeCamera->GetFrustum();
Vertex3f direction = GetDirection(); // z axis
Vertex3f perpVec1 = (direction ^ Vertex3f(0.0f, 0.0f, 1.0f)).Normalized(); // y axis
Vertex3f perpVec2 = (direction ^ perpVec1).Normalized(); // x axis
Matrix rotationMatrix;
rotationMatrix.m[0] = perpVec2.x; rotationMatrix.m[1] = perpVec1.x; rotationMatrix.m[2] = direction.x;
rotationMatrix.m[4] = perpVec2.y; rotationMatrix.m[5] = perpVec1.y; rotationMatrix.m[6] = direction.y;
rotationMatrix.m[8] = perpVec2.z; rotationMatrix.m[9] = perpVec1.z; rotationMatrix.m[10] = direction.z;
Vertex3f frustumVertices[8];
cameraFrustum->GetFrustumVertices(frustumVertices);
for (AInt i = 0; i < 8; i++)
frustumVertices[i] = rotationMatrix * frustumVertices[i];
Vertex3f minV = frustumVertices[0], maxV = frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
minV.x = min(minV.x, frustumVertices[i].x);
minV.y = min(minV.y, frustumVertices[i].y);
minV.z = min(minV.z, frustumVertices[i].z);
maxV.x = max(maxV.x, frustumVertices[i].x);
maxV.y = max(maxV.y, frustumVertices[i].y);
maxV.z = max(maxV.z, frustumVertices[i].z);
}
Vertex3f extends = maxV - minV;
extends *= 0.5f;
Matrix viewMatrix = Matrix::MakeLookAt(cameraFrustum->GetBoundingBoxCenter(), direction, perpVec1);
Matrix projectionMatrix = Matrix::MakeOrtho(-extends.x, extends.x, -extends.y, extends.y, -extends.z, extends.z);
Matrix projectionViewMatrix = projectionMatrix * viewMatrix;
SceneObject::SetMatrix("ViewMatrix", viewMatrix);
SceneObject::SetMatrix("ProjectionMatrix", projectionMatrix);
SceneObject::SetMatrix("ProjectionViewMatrix", projectionViewMatrix);
And this is how I calculate the frustum and its bounding box:
Matrix inverseProjectionViewMatrix = projectionViewMatrix.Inversed();
_frustumVertices[0] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, -1.0f); // near top-left
_frustumVertices[1] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, -1.0f); // near top-right
_frustumVertices[2] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, -1.0f); // near bottom-left
_frustumVertices[3] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, -1.0f); // near bottom-right
_frustumVertices[4] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, 1.0f); // far top-left
_frustumVertices[5] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, 1.0f); // far top-right
_frustumVertices[6] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, 1.0f); // far bottom-left
_frustumVertices[7] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, 1.0f); // far bottom-right
_boundingBoxMin = _frustumVertices[0];
_boundingBoxMax = _frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
_boundingBoxMin.x = min(_boundingBoxMin.x, _frustumVertices[i].x);
_boundingBoxMin.y = min(_boundingBoxMin.y, _frustumVertices[i].y);
_boundingBoxMin.z = min(_boundingBoxMin.z, _frustumVertices[i].z);
_boundingBoxMax.x = max(_boundingBoxMax.x, _frustumVertices[i].x);
_boundingBoxMax.y = max(_boundingBoxMax.y, _frustumVertices[i].y);
_boundingBoxMax.z = max(_boundingBoxMax.z, _frustumVertices[i].z);
}
_boundingBoxCenter = Vertex3f((_boundingBoxMin.x + _boundingBoxMax.x) / 2.0f, (_boundingBoxMin.y + _boundingBoxMax.y) / 2.0f, (_boundingBoxMin.z + _boundingBoxMax.z) / 2.0f);
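For reference, a common mitigation for the missing-caster problem asked about above is to extend the light-space bounding box toward the light after the min/max loop, so off-screen geometry between the light and the view frustum still lands in the shadow map. A sketch against the first code block (casterMargin is a hypothetical tuning value, e.g. the scene extent along the light direction):

// After computing minV/maxV from the rotated frustum vertices:
const float casterMargin = 100.0f; // hypothetical; how far toward the light to search for casters
minV.z -= casterMargin;            // extend the box along the light's axis (which end to extend depends on your axis conventions)
Vertex3f extends = maxV - minV;
extends *= 0.5f;

Since the orthographic z range (-extends.z, extends.z) here is symmetric around the center, this also grows the far side; tightening it would require passing an asymmetric near/far pair to MakeOrtho.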