OpenGL convert/translate/normalize coordinates? - c++

I am looking for examples of (non-deprecated) OpenGL code that show, step by step, how coordinates are converted from start to finish in a 2D setting that uses non-standard coordinates, e.g.
GLfloat Square[] =
{
    -5.5f, -5.0f, 0.0f, 1.0f,
    -5.5f,  5.0f, 0.0f, 1.0f,
     5.5f,  5.0f, 0.0f, 1.0f,
     5.5f, -5.0f, 0.0f, 1.0f
};
Then, taking coordinates such as the above, convert them into whatever coordinates are necessary so that they map into the -1 to 1 range and are output to the screen.
Every example and tutorial I have found on the net only uses coordinates that are already in the -1 to 1 range.
Environment: C++.

Well, if you use glm (http://glm.g-truc.net/index.html) you can just use glm::ortho to create an orthographic projection matrix, in the same way you would use glOrtho in old-style OpenGL. E.g.:
glm::mat4 projectionMatrix = glm::ortho(-5.0f, 5.0f, -5.0f, 5.0f, 0.0f, 1.0f);
If you then plug that into your shader you should get a mapping from -5 to +5 instead of -1 to +1, or whatever scale it is that you want.
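As a minimal sketch of what "plug that into your shader" can look like (the names u_projection, a_position, program and loc are placeholders, not from the original post), the vertex shader applies the matrix:

#version 330 core
layout(location = 0) in vec4 a_position; // e.g. positions in the -5..5 range
uniform mat4 u_projection;               // receives the glm::ortho matrix
void main()
{
    gl_Position = u_projection * a_position;
}

and the C++ side uploads the matrix once the program is linked (glm::value_ptr comes from <glm/gtc/type_ptr.hpp>):

GLint loc = glGetUniformLocation(program, "u_projection");
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(projectionMatrix));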

When you use 'no' transformations in your project, you simply need to use coordinates that lie in "normalized device space": a box ranging from -1 to 1 on each axis.
In that case the vertex shader will contain a line similar to this:
gl_Position = attribVertexPos; // no transformation
For 2D, if you want to provide coords from a different range, all you need to do is scale the position. Note that only x, y and z should be scaled: if attribVertexPos is a vec4 with w = 1, then gl_Position = attribVertexPos * 0.1 also scales w to 0.1, and the perspective divide cancels the scaling out again. For the range -10 to 10, use a vertex shader with code:
gl_Position = vec4(attribVertexPos.xyz * 0.1, 1.0);
Or you can build a scaling matrix and use it as well.
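For reference, a complete vertex shader along these lines might look like this (attribVertexPos is the attribute name used above; the 0.1 factor maps the -10 to 10 range onto -1 to 1):

#version 330 core
layout(location = 0) in vec4 attribVertexPos; // positions supplied in the -10..10 range

void main()
{
    // Scale x/y/z only and keep w at 1.0 so the perspective divide stays a no-op.
    gl_Position = vec4(attribVertexPos.xyz * 0.1, 1.0);
}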

Related

Create an OpenGL 2D view camera but using model view projection cameras

I am trying to create a 2D, top-down style camera in OpenGL. I would like to stick to the convention of using model-view-projection matrices, so I can switch between a 3D view and a top-down view while the application runs. I am using the glm::lookAt method to create the view matrix.
However, there is something missing in my understanding. I am rendering a triangle on the screen, [very close to this tutorial][1], and that works perfectly fine (so no problems with windowing, display loops, vertex buffers, etc.). The triangle is centered at (0, 0), and the vertices are at -0.5/0.5 (so already in NDC).
I then added a uniform mat4 mpv; to the vertex shader. If I set the mpv matrix to:
glm::vec3 camera_pos = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 target_pos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 view = glm::lookAt(camera_pos, target_pos, glm::vec3(0.0f, 1.0f, 0.0f));
I get the same, unmodified triangle, as expected, since these are (from my understanding) the default values for OpenGL.
Now I thought that if I changed the Z value of the camera position it would have the same effect as zooming in and out; however, all I get is the clear color, and no triangle is rendered.
// Trying to simulate zoom in and out by changing z value of camera
glm::vec3 camera_pos = glm::vec3(0.0f, 0.0f, -3.0f);
glm::vec3 target_pos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 view = glm::lookAt(camera_pos, target_pos, glm::vec3(0.0f, 1.0f, 0.0f));
So I printed the view matrix, and noticed that all I was doing was translating the Z value, which makes sense.
I then added an ortho projection matrix, to make sure everything is in NDC, but I still get nothing.
// *2 because I'm on a Mac/high-res screen and the framebuffer scale is 2.
// Doing projection * view in one step and just updating the view uniform until I get it working.
view = glm::ortho(0.0f, 800.0f * 2, 0.0f, 600.0f * 2, 0.1f, 100.0f) * view;
Where is my misunderstanding taking place? I would like to:
Simulate a top down view where I can zoom in and out on the target.
Create a 2D camera that follows a target (racing car), so the camera_pos XY and target_pos XY will be the same.
Eventually add an option to switch to a 3D following camera, like a standard racing game 3rd person view, hence the MPV vs just using simple translations.
[1]: https://learnopengl.com/Getting-started/Hello-Triangle
The vertex coordinates are in the range [-0.5, 0.5], but the orthographic projection maps the cuboid volume with left, bottom, near corner (0, 0, 0.1) and right, top, far corner (800.0 * 2, 600.0 * 2, 100) onto the viewport.
Therefore, the triangle mesh just covers one fragment in the lower left corner of the viewport.
Change the orthographic projection from:
view = glm::ortho(0.0f, 800.0f * 2, 0.0f, 600.0f * 2, 0.1f, 100.0f) * view;
to:
view = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.1f, 100.0f) * view;
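As for the zoom: with an orthographic projection, moving the camera along Z does not change the apparent size of objects, because there is no perspective divide. Instead, scale the extents of the orthographic volume. A sketch (the zoom variable is hypothetical, not from the original question):

float zoom = 2.0f;              // hypothetical factor; > 1 zooms in
float halfExtent = 1.0f / zoom; // shrink the visible area to zoom in
view = glm::ortho(-halfExtent, halfExtent, -halfExtent, halfExtent, 0.1f, 100.0f) * view;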

Incorrect ray direction from inverse vp matrix and camera position

I have a problem with my ray generation that I do not understand: the direction of my ray is computed incorrectly. I ported this code from DirectX 11, where it works fine, to Vulkan, so I was surprised I could not get it to work:
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1); // unproject a point on the far plane
farPos /= farPos.w;
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - camPos.xyz); // direction from the camera position
Yet this code works perfectly:
vec4 nearPos = inverseViewProj * vec4(screenPos, 0, 1);
nearPos /= nearPos.w;
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1);
farPos /= farPos.w;
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - nearPos.xyz);
[Edit] Matrix and camera positions are set like this:
// Vulkan clip-space correction: flip Y, map Z from [-1, 1] to [0, 1].
const glm::mat4 clip(1.0f,  0.0f, 0.0f, 0.0f,
                     0.0f, -1.0f, 0.0f, 0.0f,
                     0.0f,  0.0f, 0.5f, 0.0f,
                     0.0f,  0.0f, 0.5f, 1.0f);
projMatrix = clip * glm::perspectiveFov(FieldOfView, float(ViewWidth), float(ViewHeight), NearZ, FarZ);
viewMatrix = glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));
buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
buffer.camPos = viewMatrix[3];
[Edit2] What I see on screen is correct if I start at the origin. However, if I move left, for example, it looks as if I am moving right. All my rays seem to be perturbed. In some cases, strafing the camera looks as if I am moving around a different point in space. I assume the camera position is not equal to the singularity of my perspective matrix, yet I cannot figure out why.
I think I am misunderstanding something basic. What am I missing?
Thanks to the comments I have found the problem. I was building my view matrix incorrectly, in the exact same way as in this post:
glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));
This is equal to translating first and then rotating, which of course leads to something unwanted. In addition, the Position was negative and camPos was obtained using the last column of the view matrix instead of the inverse view matrix, which is wrong.
It was not noticeable with my fractal raycaster, simply because I never moved far away from the origin; that, and the fact that there is no point of reference in such an environment.
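For reference, a corrected construction could look like this (a sketch using the glm conventions from the question; Rotation is the camera's orientation quaternion and Position its world-space position):

// Camera-to-world transform: orient the camera, then place it at Position.
glm::mat4 cameraToWorld = glm::translate(glm::mat4(1.0f), Position) * glm::toMat4(Rotation);
// The view matrix is the inverse of that (world-to-camera).
viewMatrix = glm::inverse(cameraToWorld);
buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
// Use the world-space position directly instead of reading a view-matrix column.
buffer.camPos = glm::vec4(Position, 1.0f);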

Why does my vector not rotate correctly OpenGL/GLM?

I am trying to learn how to do some transformations on 3D points in OpenGL. Using this cheat sheet, I believe that I have the correct matrix to multiply with the vector I want to rotate. However, when I multiply and print the new coord, it appears to be incorrect. (Rotating (1, 0, 0) by 90° counterclockwise should result in (0, 1, 0), correct?) Why is this not working?
My code:
glm::vec4 vec(1.0f, 0.0f, 0.0f, 1.0f);
glm::mat4 trans = {
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, cos(glm::radians(90.0f)), -sin(glm::radians(90.0f)), 0.0f,
    0.0f, sin(glm::radians(90.0f)), cos(glm::radians(90.0f)), 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f
};
vec = trans * vec; //I should get 0.0, 1.0, 0.0 right?
std::cout << vec.x << ", " << vec.y << ", " << vec.z << std::endl;
The above prints 1.0, 0.0, 0.0 indicating that there was no change at all?
I also tried using the rotate function in GLM to generate my matrix rather than manually specifying it, but I still did not get what I think should be correct (I got a different wrong answer).
glm::mat4 trans = glm::rotate(trans, 90.0f, glm::vec3(0.0, 0.0, 1.0)); //EDIT: my bad, should've been z axis not x, still not working
The above prints: -2.14..e+08, -2.14..e+08, -2.14..e+08
(PS: I just took geometry in the previous school year; my apologies if the math is incorrect. I have a basic understanding of matrices and matrix multiplication that I picked up today to learn OpenGL transformations, but other than that I'm sort of a noob at this.)
In your code, you're rotating a unit vector on the x-axis around the x-axis, and that doesn't change the vector (imagine rotating a pencil around itself: the direction doesn't change at all).
To achieve what you wanted, you should rotate the vector around the z-axis using a matrix like this:
glm::mat4 trans = {
    // Note: glm initializer lists are column-major, so each line written
    // here is a column of the matrix, not a row.
     cos(glm::radians(90.0f)), sin(glm::radians(90.0f)), 0.0f, 0.0f,
    -sin(glm::radians(90.0f)), cos(glm::radians(90.0f)), 0.0f, 0.0f,
     0.0f, 0.0f, 1.0f, 0.0f,
     0.0f, 0.0f, 0.0f, 1.0f
};
Besides that, glm::mat4 trans = glm::rotate(trans, 90.0f, glm::vec3(0.0, 0.0, 1.0)); isn't returning the desired result because trans needs to be initialized before being passed to glm::rotate, and the angle must be given in radians. Try writing it like this:
glm::mat4 trans(1.0f); // Explicitly the identity matrix (recent GLM versions leave a default-constructed mat4 uninitialized)
trans = glm::rotate(trans, glm::radians(90.0f), glm::vec3(0.0, 0.0, 1.0));
Still, you might not get the exact expected vector (0.0f, 1.0f, 0.0f) due to precision loss in glm::radians and cos/sin. In my test, I got the vector (-4.37113883e-008f, 1.0f, 0.0f), and -4.37113883e-008 is really -0.0000000437113883, a number very close to 0, the expected result.
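Putting it all together, a minimal self-contained test could look like this (the main function and the printing are mine, not from the original code):

#include <iostream>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::rotate

int main()
{
    glm::vec4 vec(1.0f, 0.0f, 0.0f, 1.0f);

    glm::mat4 trans(1.0f); // identity
    trans = glm::rotate(trans, glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f));

    vec = trans * vec;
    // Expected: approximately (0, 1, 0); x may come out as ~-4.4e-8 due to float precision.
    std::cout << vec.x << ", " << vec.y << ", " << vec.z << std::endl;
}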
The reason why your own rotation matrix does not change the input is simple: your rotation only affects the y and z coordinates, and since those are zero, the result is exactly the same as the input. The x coordinate is multiplied by 1 into the output x coordinate, so it stays the same.
You can make the vector 1.0, 2.0, 3.0, 1.0, for example, and then you will see changes.
As for the glm version, I can't say why it would give a strange result; I have never had issues with it, but I haven't used it much.

Direct3D (v10) Multiple World Transformations

In my code I am trying to run two (at the moment, probably more in the future) matrix transformations on my world matrix.
Like so:
D3DXMatrixRotationY(&worldMatrix, rotation);
D3DXMatrixTranslation(&worldMatrix, 0.0f, -1.0f, 0.0f);
where rotation is a changing float and worldMatrix is a D3DXMATRIX. My problem is that only the last of the transformation statements takes effect. So in the above code, the worldMatrix gets translated, but not rotated; if I switch the order of the two statements, the worldMatrix gets rotated, but not translated. However, I played around with it, and this code works just fine:
D3DXMatrixRotationY(&worldMatrix, rotation);
D3DXMATRIX temp = worldMatrix;
D3DXMatrixTranslation(&worldMatrix, 0.0f, -1.0f, 0.0f);
worldMatrix *= temp;
After this, the worldMatrix is translated and rotated. Why doesn't it work if I only use the variables and don't include the temp matrix? Thank you!!
D3DXMatrixTranslation takes an output parameter as its 1st parameter. The created matrix is written to that parameter, overwriting the elements already present. The matrices are not automatically multiplied by that call.
Your new code is fine; you could also write it like this:
D3DXMATRIX rot;
D3DXMATRIX trans;
D3DXMatrixRotationY(&rot, rotation);
D3DXMatrixTranslation(&trans, 0.0f, -1.0f, 0.0f);
// D3DX uses row vectors (v * M), so rot * trans applies the rotation first, then the translation.
D3DXMATRIX world = rot * trans;

Changing From Perspective to Orthographic Matrix

I have a scene with one simple triangle, and I am using perspective projection. I have my MVP matrix set up (with the help of GLM) like this:
glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 View = glm::lookAt(
glm::vec3(0,0,5), // Camera is at (0,0,5), in World Space
glm::vec3(0,0,0), // and looks at the origin
glm::vec3(0,1,0) // Head is up (set to 0,-1,0 to look upside-down)
);
glm::mat4 Model = glm::mat4(1.0f);
glm::mat4 MVP = Projection * View * Model;
And it all works OK; I can change the values of the camera and the triangle is still displayed properly.
But I want to use orthographic projection, and when I change the projection matrix to orthographic, it behaves unpredictably: I can't display the triangle, or I just see one small part of it in the corner of the screen. To use the orthographic projection, I do this:
glm::mat4 Projection = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, -5.0f, 5.0f);
while I don't change anything in the View and Model matrices. It just doesn't work properly.
I just need a push in the right direction: am I doing something wrong? What am I missing? What should I do to properly set up the orthographic projection?
PS: I don't know if it's needed, but these are the coordinates of the triangle:
static const GLfloat g_triangle[] = {
    -1.0f, 0.0f, 0.0f,
     1.0f, 0.0f, 0.0f,
     0.0f, 2.0f, 0.0f,
};
Your triangle is about 1 unit large, but you are using an orthographic projection volume that is 800 units wide and 600 units high, so it is natural that your triangle appears very small. Just decrease the bounds of the orthographic matrix, and make sure the triangle lies inside this area (at the moment, e.g., the first vertex is outside of the view, because its x-coordinate is less than 0).
Furthermore, make sure that your triangle is not erased by backface culling or z-clipping. Btw, negative values for zNear are a bit unusual but should work for orthographic projections.
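As a concrete sketch (the numbers are mine, not from the original answer): an orthographic volume a few units across, with the same 4:3 aspect ratio as the perspective version, comfortably contains the triangle's vertices (x in [-1, 1], y in [0, 2]):

float halfHeight = 2.0f;                    // the triangle spans y in [0, 2]
float halfWidth = halfHeight * 4.0f / 3.0f; // keep the 4:3 aspect ratio
// zNear/zFar chosen so the triangle (at eye-space z = -5 with the lookAt above) fits in the depth range.
glm::mat4 Projection = glm::ortho(-halfWidth, halfWidth, -halfHeight, halfHeight, 0.1f, 100.0f);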