Rotating/Translating Object In OpenGL - c++

I'm currently using OpenGL (Glad, GLFW, GLM). I just started learning and can't find the correct way to take a transformation and actually render it. For example, I want to rotate 1 degree every frame. I've seen tutorials on how to use GLM to build those transformations, but I can't figure out how to take something like this: glm::mat4 translate = glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, 0.f)); and apply it to an object rendered like this:
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);
I'm really not sure what I'm not understanding and any help would be appreciated.
Edit: I know you can just change the vertices and rebind the array, but that seems like it would be pretty slow. Would that work?

I am assuming that you are somewhat familiar with the concept of the different coordinate systems and transformations like the projection, view and model matrix. If not, you should read up on them here https://learnopengl.com/Getting-started/Coordinate-Systems.
In short, the model matrix transforms the object as a whole in the world. In most cases it contains translation, rotation and scaling. It is important to note that the order of matrix multiplication is generally defined as
glm::mat4 model = translate * rotate * scale;
The view matrix is needed to get the view of the camera and the projection matrix adds perspective and is used to determine what is on the screen and will be rendered.
To apply the transformation to the object that you want to draw with your shader, you will need to load the matrix into the shader first.
glm::mat4 model = glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, 0.f));
unsigned int modelMatrixLoc = glGetUniformLocation(ourShader.ID, "model");
glUniformMatrix4fv(modelMatrixLoc, 1, GL_FALSE, glm::value_ptr(model));
Here, the model matrix would be loaded into the shader under the name "model". You can then use this matrix in your shader to transform the vertices on the GPU.
A simple shader would then look like this
#version 460 core
layout (location = 0) in vec3 position_in;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(position_in, 1.0);
}
To ensure that your normal vectors do not get skewed, you can calculate another matrix
glm::mat3 model_normal = glm::transpose(glm::inverse(model));
If you were to add this to the shader, it would look something like this
#version 460 core
layout (location = 0) in vec3 position_in;
layout (location = 1) in vec3 normal_in;

uniform mat4 model;
uniform mat3 model_normal;
uniform mat4 view;
uniform mat4 projection;

out vec3 normal;

void main()
{
    gl_Position = projection * view * model * vec4(position_in, 1.0);
    normal = model_normal * normal_in;
}
Notice how we can now use a mat3 instead of a mat4? This is because we do not want to translate the normal vector, and the translation part of the matrix is located in the fourth column, which we cut away here. This also means that it is important to set the last component of the 4D vector to 1 if we want to translate it and to 0 if we do not.
Edit: I know you can just change the vertices and rebind the array, but that seems like it would be pretty slow. Would that work?
You can always edit your model if you want to change its looks. However, the transformations are going to be way slower on the CPU. Remember that the GPU is highly optimized for this kind of task, so I would advise doing most transformations on the GPU.
If you only need to change a part of the vertices of your object, updating the vertices will work better in most cases.
How would I constantly rotate as a function of time?
To rotate as a function of time, there are multiple approaches. To clarify, transformations that are applied to an object in the shader are not permanent and will be reset for the next draw call. If the rotation is always performed around the same axis, it is a good idea to track the angle somewhere and then calculate a single rotation matrix for that angle. To do this with GLFW, you can use
const double radians_per_second = ... ;
const glm::vec3 axis = ... ;
// rendering loop start
double time_seconds = glfwGetTime();
float angle = static_cast<float>(radians_per_second * time_seconds);
glm::mat4 rotation = glm::rotate(glm::mat4(1.f), angle, axis);
On the other hand, if the rotations are not performed on the same axis, you will have to multiply both rotation matrices together.
rotation = rotation * additional_rotation;
In both cases, you need to set the rotation for your model matrix like I explained above.
Also, if I wanted to make a square follow the mouse, would I have to rebuild the vertices every time the mouse moves?
No you do not need to do that. If you just want to move the square to the position of the mouse, you can use a translation. To get the mouse position in world space, it seems that you can use glm::unProject( ... );. I have not tried this yet, but it looks like it could solve your problem. You can take a look at it here
https://glm.g-truc.net/0.9.2/api/a00245.html#gac38d611231b15799a0c06c54ff1ede43.
If you need more information on this topic, you can take a look at this thread where it has already been answered
Using GLM's UnProject.
What's the GLFW/GLAD function to change the camera's position and rotation?
That is just the view matrix. Look here again.
I'm basically trying to create the FPS camera. I figured out movement but I can't figure out rotation.
You can take a look here learnopengl.com/Getting-started/Camera. Just scroll down until you see the section "Look around". I think that the explanation there is pretty good.
I already looked at that but was confused about one thing. If each rendered cube has view, perspective, and model, how would I change the camera's view, perspective, and model?
Ok, I think that I understand the misconception here. Not all 3 matrices are per object. The model matrix is per object, the view matrix per camera, and the projection matrix per viewing mode (e.g. perspective with a fov of 90° or orthographic), so it is usually set only once.

Related

Set projection mode in OpenGL 3?

In old OpenGL versions spatial projection (where objects become smaller with growing distance) could be enabled via a call to
glMatrixMode(GL_PROJECTION);
(at least as far as I remember this was the call to enable this mode).
In OpenGL 3 - when this slow stack-mode is no longer used - this function does not work any more.
So how can I have the same spatial effect here? What is the intended way for this?
You've completely misunderstood what glMatrixMode actually did. The old, legacy OpenGL fixed function pipeline kept a set of matrices around, which were all used indiscriminately when drawing stuff. The two most important matrices were:
the modelview matrix, which is used to describe the transformation from model local space into view space. View space is still kind of abstract, but it can be understood as the world transformed into the coordinate space of the "camera". Illumination calculations happened in that space.
the projection matrix, which is used to describe the transformation from view space into clip space. Clip space is an intermediary stage right before reaching device coordinates (there are a few important details involved in this, but those are not important right now), which mostly involves applying the homogeneous divide, i.e. scaling the clip coordinate vector by the reciprocal of its w-component.
The fixed transformation pipeline always was
position_view := Modelview · position
do illumination calculations with position_view
position_clip := Projection · position_view
position_pre_ndc := position_clip · 1/position_clip.w
In legacy OpenGL the modelview and projection matrix are always there. glMatrixMode is a selector that determines which of the existing matrices is subject to the operations done by the matrix manipulation functions. One of these functions is glFrustum, which generates and multiplies a perspective matrix, i.e. a matrix which will create a perspective effect through the homogeneous divide.
So how can I have the same spatial effect here? What is the intended way for this?
You generate a perspective matrix of the desired properties, and use it to transform the vertex attribute you designate as model local position into clip space and submit that into the gl_Position output of the vertex shader. The usual way to do this is by passing in a modelview and a projection matrix as uniforms.
The most bare bones GLSL vertex shader doing that would be
#version 330

uniform mat4 modelview;
uniform mat4 projection;

in vec4 pos;

void main(){
    gl_Position = projection * modelview * pos;
}
As for generating the projection matrix: All the popular computer graphics math libraries got you covered and have functions for that.

In OpenGL should model coordinates be calculated on my CPU or on the GPU with OpenGL calls?

I am currently trying to understand OpenGL. I have a good understanding of the math behind the matrix transformations.
So I want to write a small 3D application where I could render many, many vertices. There would be different objects, each with their own set of vertices and world coordinates.
To get the actual coordinates of my vertices, I need to multiply them by the transformation matrix corresponding to the position/rotation of my object.
Here is my problem: I don't understand how, in OpenGL, to have the GPU perform these transformations for all these vertices. From my understanding it would be much faster, but I can't seem to figure out how to do it.
Or should I calculate each of those coordinates with the CPU and draw the transformed vertices with openGL?
There's a couple of different ways to solve this problem, depending on your circumstances.
The major draw model that people use looks like this: (I'm not going to check the exact syntax, but I'm pretty sure this is correct)
//Host Code, draw loop
for (Drawable_Object const& object : objects) {
    glm::mat4 projection = /*...*/;
    glm::mat4 view = /*...*/;
    glm::mat4 model = glm::translate(glm::mat4(1), object.position); // position might be vec3 or vec4
    glm::mat4 mvp = projection * view * model;
    glUniformMatrix4fv(glGetUniformLocation(program, "mvp"), 1, GL_FALSE, glm::value_ptr(mvp));
    object.draw(); // glDrawArrays, glDrawElements, etc...
}
//GLSL Vertex Shader
#version 330 core
layout(location = 0) in vec3 vertex;
uniform mat4 mvp;
/*...*/
void main() {
    gl_Position = mvp * vec4(vertex, 1);
    /*...*/
}
In this model, the matrices are calculated on the host, and then applied on the GPU. This minimizes the amount of data that needs to be passed on the CPU<-->GPU bus (which, while not often a limitation in graphics, can be a consideration to keep in mind), and is generally the cleanest in terms of reading/parsing the code.
There's a variety of other techniques you can use (and, if you do instanced rendering, have to use), but for most applications, it's not necessary to deviate from this model in any significant way.

How to transform vertices in vertex shader to get a 3D billboard

I'm trying to implement vertex shader code to achieve the "billboard" behaviour on a given vertex mesh. What I want is to define the mesh normally (like a 3D object) and then have it always facing the camera. I also need it to always have the same size (screen-wise). These two "effects" should happen:
The only difference in my case is that instead of a 2-D bar, I want to have a 3D-object.
To do so, I'm trying to follow the alternative 3 in this tutorial (the same where the images are taken from), but I can't figure out many of the assumptions they made (probably due to my lack of experience in graphics and OpenGL).
My shader applies the common transformation stack to vertices, i.e.:
gl_Position = project * view * model * position;
Where position is the input attribute with the vertex location in world space. I want to be able to apply model transformations (such as translation, scale and rotation) to modify the orientation of the object with respect to the camera. I understand the concepts explained in the tutorial, but I can't seem to understand how to apply them in my case.
What I've tried is the following (extracted from this answer, and similar to the tutorial):
uniform vec4 billbrd_pos;
...
gl_Position = project * (view * model * billbrd_pos + vec4(position.xy, 0, 0));
But what I get is a shape whose size is bigger when it is closer to the camera, and smaller otherwise. Did I forget something?
Is it possible to do this in the vertex shader?
uniform vec4 billbrd_pos;
...
vec4 view_pos = view * model * billbrd_pos;
float dist = -view_pos.z;
gl_Position = project * (view_pos + vec4(position.xy*dist,0,0));
That way the fragment depths are still correct (at billbrd_pos depth) and you don't have to keep track of the screen's aspect ratio (as the linked tutorial does). It's dependent on the projection matrix though.

Transformation to center 3D object in OpenGL

I want to center the 3D object on the screen and be able to rotate/scale it. While rotating/scaling, the object's center should stay at the same place (similar to MeshLab's presentation).
This is my vertex shader:
gl_Position = mvp * vec4(VertexPosition,1.0);
And here is my modelview matrix in client code:
mat4 mvp = glm::translate(glm::mat4(1), vec3(-centerx, -centery, -centerz));
mvp = glm::scale(view, vec3(0.5/zoom, 0.5/zoom, 0.5/zoom));
Centerx, centery, etc. is the center of the object. Zoom is the max size of the object (so that it appears between -1 and 1). How do I get to the correct transformation? Is there anything else I need?
This is a box that I colored by vertex position.
I'm not very familiar with glm, but my guess would be that you have to update the matrix in the shader. You can do this with a function from the glUniformMatrix* family, e.g. glUniformMatrix4fv(mvpLocation, ...).
When you manipulate the matrix or other variables that you want to use in a shader, you have to send these updates to your shader, otherwise they won't have any effect.

How to move from model space to orthographic view in OpenGL 3.0+?

I've recently completed some examples in OpenGL which resulted in me drawing some triangles in the space -1 <= x <= 1, -1 <= y <= 1. What do I need to do to move to an orthographic view? What transforms do I need to perform? I'm assuming the resulting transform will be set into the vertex shader as a uniform variable, then applied to any incoming vertices.
Is it wise to use pixels as the scale of the view (i.e. 1024x768) or to use logical units (pixels x1000 for example)?
What do I need to do to move to an orthographic view? What transforms do I need to perform?
You must apply an orthographic projection. Note that the identity transform you used so far is already orthographic.
It's a good idea to decompose the transformation process into projection and modelview transformations. Each is described by a 4×4 matrix, which, as you rightly assumed, are passed as uniforms to the vertex shader.
The typical vertex shader layout looks like
#version 120 /* or something later */

attribute vec4 position;

uniform mat4 proj;
uniform mat4 mv;

varying vec4 vert_pos; // for later versions of GLSL replace 'varying' with 'out'

void main()
{
    /* order matters, matrix multiplication is not commutative */
    gl_Position = proj * mv * position;
    vert_pos = gl_Position;
}
The projection matrix itself must be supplied by you. You can either look at older fixed-function OpenGL specifications to see how they're implemented, or use a ready-to-use graphics math library, like →GLM or (self-advertisement) →linmath.h
Update due to comment
The modelview transform is used to set the point of view and the placement of geometry drawn into the scene. It usually differs for each model drawn. Modelview itself can be decomposed into model and view. The view is what some people set using some sort of "lookAt" function, and model is the geometry placement part.
The projection is kind of the "lens" of OpenGL and what's responsible for the "ortho" or "perspective" or whatever look.
As stated above, the specific projection to be used is user-defined, but it usually follows the typical transformation matrices found in older OpenGL specifications or in graphics math libraries. Just look at some older specification of OpenGL (say OpenGL 1.5 or so) for the definition of the ortho and frustum matrices (they can be found under the fixed functions glOrtho and glFrustum).