Transformation to center 3D object in OpenGL

I want to center the 3D object on the screen and be able to rotate/scale it. While rotating/scaling, the object's center should stay in the same place (similar to how MeshLab presents a model).
This is my vertex shader:
gl_Position = mvp * vec4(VertexPosition,1.0);
And here is my modelview matrix in client code:
mat4 mvp = glm::translate(glm::mat4(1), vec3(-centerx, -centery, -centerz));
mvp = glm::scale(mvp, vec3(0.5/zoom, 0.5/zoom, 0.5/zoom));
centerx, centery, etc. are the center of the object, and zoom is the maximum extent of the object (so that it fits between -1 and 1). How do I get the correct transformation? Is there anything else I need?
This is a box, where I colored it by vertex position.

I'm not very familiar with glm, but my guess would be that you have to update the matrix uniform in the shader. You can do this with glUniformMatrix4fv(location, ...).
When you manipulate the matrix, or any other variable that you want to use in a shader, you have to send these updates to your shader, otherwise they won't have any effect.
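For illustration, a minimal sketch of such an update, assuming the uniform is named "mvp" and reusing the question's variables (program, zoom, centerx, ...):
// Build the modelview on the CPU: first move the object's center to the
// origin, then scale it into the [-1, 1] range (names from the question).
glm::mat4 mvp = glm::scale(glm::mat4(1.f), glm::vec3(0.5f / zoom));
mvp = glm::translate(mvp, glm::vec3(-centerx, -centery, -centerz));

// Send the updated matrix to the shader whenever it changes;
// "mvp" must match the uniform name in the vertex shader.
glUniformMatrix4fv(glGetUniformLocation(program, "mvp"),
                   1, GL_FALSE, glm::value_ptr(mvp));
Composing the matrix as scale-then-translate means the translation to the origin is applied to the vertices first, so the object stays centered while it is scaled.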

Related

Rotating/Translating Object In OpenGL

I'm currently using OpenGL (Glad, GLFW, GLM). I just started learning and can't find the correct way to take a translation and actually render it. For example, I want to rotate 1 degree every frame. I've seen tutorials on how to use GLM to make those translations, but I can't seem to figure out how to take something like this:
glm::mat4 translate = glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, 0.f));
and apply it to an object rendered like this:
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);
I'm really not sure what I'm not understanding and any help would be appreciated.
Edit: I know you can just change the vertices and rebind the array, but that seems like it would be pretty slow. Would that work?
I am assuming that you are somewhat familiar with the concept of the different coordinate systems and transformations like the projection, view and model matrix. If not, you should read up on them here https://learnopengl.com/Getting-started/Coordinate-Systems.
In short, the model matrix transforms the object as a whole in the world. In most cases it contains translation, rotation and scaling. It is important to note that the order of multiplication matters; the model matrix is generally built as
glm::mat4 model = translate * rotate * scale;
The view matrix represents the camera's point of view, and the projection matrix adds perspective and determines what ends up on the screen and will be rendered.
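For reference, these two are typically built with GLM like this (a sketch; all parameter values are placeholders):
// Needs <glm/gtc/matrix_transform.hpp>.
glm::mat4 view = glm::lookAt(glm::vec3(0.f, 0.f, 3.f),  // camera position
                             glm::vec3(0.f, 0.f, 0.f),  // point to look at
                             glm::vec3(0.f, 1.f, 0.f)); // up direction
glm::mat4 projection = glm::perspective(glm::radians(45.f),           // vertical fov
                                        (float)width / (float)height, // aspect ratio
                                        0.1f, 100.f);                 // near/far planes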
To apply the transformation to the object that you want to draw with your shader, you will need to load the matrix into the shader first.
// glm::value_ptr comes from <glm/gtc/type_ptr.hpp>
glm::mat4 model = glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, 0.f));
unsigned int modelMatrixLoc = glGetUniformLocation(ourShader.ID, "model");
glUniformMatrix4fv(modelMatrixLoc, 1, GL_FALSE, glm::value_ptr(model));
Here, the model matrix would be loaded into the shader under the name "model". You can then use this matrix in your shader to transform the vertices on the GPU.
A simple shader would then look like this
#version 460 core
layout (location = 0) in vec3 position_in;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    gl_Position = projection * view * model * vec4(position_in, 1.0);
}
To ensure that your normal vectors do not get skewed, you can calculate another matrix
glm::mat3 model_normal = glm::transpose(glm::inverse(model));
If you were to add this to the shader, it would look something like this
#version 460 core
layout (location = 0) in vec3 position_in;
layout (location = 1) in vec3 normal_in;
uniform mat4 model;
uniform mat3 model_normal;
uniform mat4 view;
uniform mat4 projection;
out vec3 normal;
void main()
{
    gl_Position = projection * view * model * vec4(position_in, 1.0);
    normal = model_normal * normal_in;
}
Notice how we can now use a mat3 instead of a mat4? This is because we do not want to translate the normal vector, and the translation part of the matrix is located in the fourth column, which we cut away here. This also means that it is important to set the last component of a 4D vector to 1 if we want it to be translated and to 0 if we do not.
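On the CPU side, the mat3 is uploaded just like the mat4, only with glUniformMatrix3fv (a sketch, assuming the shader and names from above):
glm::mat3 model_normal = glm::transpose(glm::inverse(model));
unsigned int normalMatrixLoc = glGetUniformLocation(ourShader.ID, "model_normal");
glUniformMatrix3fv(normalMatrixLoc, 1, GL_FALSE, glm::value_ptr(model_normal));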
Edit: I know you can just change the vertices and rebind the array, but that seems like it would be pretty slow. Would that work?
You can always edit your model if you want to change its looks. However, those transformations are going to be much slower on the CPU. Remember that the GPU is highly optimized for this kind of task, so I would advise doing most transformations on the GPU.
If you only need to change a part of the vertices of your object, updating the vertices will work better in most cases.
How would I constantly rotate as a function of time?
To rotate as a function of time, there are multiple approaches. To clarify: transformations applied to an object in the shader are not permanent and will be reset for the next draw call. If the rotation is always performed around the same axis, it is a good idea to store the angle somewhere and then calculate a single rotation matrix for that angle. With GLFW, you can use
const double radians_per_second = ... ;
const glm::vec3 axis = ... ;
// rendering loop start
double time_seconds = glfwGetTime();
float angle = (float)(radians_per_second * time_seconds);
glm::mat4 rotation = glm::rotate(glm::mat4(1.f), angle, axis);
On the other hand, if the rotations are not performed on the same axis, you will have to multiply both rotation matrices together.
rotation = rotation * additional_rotation;
In both cases, you need to set the rotation for your model matrix like I explained above.
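Putting the pieces together, a frame could look roughly like this (a sketch; translate, scale, modelMatrixLoc, VAO and window are assumed from the snippets above):
while (!glfwWindowShouldClose(window))
{
    // Recompute the rotation from the elapsed time each frame.
    float angle = (float)(radians_per_second * glfwGetTime());
    glm::mat4 rotation = glm::rotate(glm::mat4(1.f), angle, axis);
    glm::mat4 model = translate * rotation * scale; // translate * rotate * scale, as above

    glUniformMatrix4fv(modelMatrixLoc, 1, GL_FALSE, glm::value_ptr(model));
    glBindVertexArray(VAO);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);

    glfwSwapBuffers(window);
    glfwPollEvents();
}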
Also, if I wanted to make a square follow the mouse, would I have to rebuild the vertices every time the mouse moves?
No, you do not need to do that. If you just want to move the square to the position of the mouse, you can use a translation. To get the mouse position in world space, it seems that you can use glm::unProject(...). I have not tried this yet, but it looks like it could solve your problem. You can take a look at it here
https://glm.g-truc.net/0.9.2/api/a00245.html#gac38d611231b15799a0c06c54ff1ede43.
If you need more information on this topic, you can take a look at this thread where it has already been answered
Using GLM's UnProject.
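For what it's worth, an untested sketch of what that could look like (mouseX/mouseY from glfwGetCursorPos, depth read back from the depth buffer; all names are assumptions):
// glm::unProject expects a bottom-left window origin, while GLFW reports
// the cursor from the top-left, hence the y flip.
glm::vec4 viewport(0.f, 0.f, (float)winWidth, (float)winHeight);
glm::vec3 win((float)mouseX, (float)winHeight - (float)mouseY, depth);
glm::vec3 world = glm::unProject(win, view * model, projection, viewport);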
What's the GLFW/GLAD function to change the camera's position and rotation?
That is just the view matrix. Look here again.
I'm basically trying to create the FPS camera. I figured out movement but I can't figure out rotation.
You can take a look here learnopengl.com/Getting-started/Camera. Just scroll down until you see the section "Look around". I think that the explanation there is pretty good.
I already looked at that but was confused about one thing. If each rendered cube has view, perspective, and model, how would I change the camera's view, perspective, and model?
OK, I think I understand the misconception here. Not all 3 matrices are per object. The model matrix is per object, the view matrix is per camera, and the projection matrix is per viewing mode (e.g. perspective with a fov of 90° or orthogonal), so there is usually only one of it.
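As a sketch (with a hypothetical Object struct holding a model matrix, a VAO and an index count), a frame could look like this:
// View and projection are uploaded once per frame, the model matrix
// once per object.
glUseProgram(shaderProgram);
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, glm::value_ptr(projection));
for (const Object& obj : objects)
{
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(obj.model));
    glBindVertexArray(obj.vao);
    glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, 0);
}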

Set projection mode in OpenGL 3?

In old OpenGL versions spatial projection (where objects become smaller with growing distance) could be enabled via a call to
glMatrixMode(GL_PROJECTION);
(at least as far as I remember this was the call to enable this mode).
In OpenGL 3, where this slow stack mode is no longer used, this function no longer works.
So how can I have the same spatial effect here? What is the intended way for this?
You've completely misunderstood what glMatrixMode actually did. The old, legacy OpenGL fixed function pipeline kept a set of matrices around, which were all used indiscriminately when drawing stuff. The two most important matrices were:
the modelview matrix, which is used to describe the transformation from model local space into view space. View space is still kind of abstract, but it can be understood as the world transformed into the coordinate space of the "camera". Illumination calculations happened in that space.
the projection matrix, which is used to describe the transformation from view space into clip space. Clip space is an intermediary stage right before reaching device coordinates (there are a few important details involved in this, but those are not important right now), which mostly involves applying the homogeneous divide, i.e. scaling the clip coordinate vector by the reciprocal of its w component.
The fixed transformation pipeline always was
position_view := Modelview · position
do illumination calculations with position_view
position_clip := Projection · position_view
position_pre_ndc := position_clip · 1/position_clip.w
In legacy OpenGL the modelview and projection matrix are always there. glMatrixMode is a selector for which of the existing matrices is subject to the operations done by the matrix manipulation functions. One of these functions is glFrustum, which generates and multiplies a perspective matrix, i.e. a matrix which will create a perspective effect through the homogeneous divide.
So how can I have the same spatial effect here? What is the intended way for this?
You generate a perspective matrix of the desired properties, and use it to transform the vertex attribute you designate as model local position into clip space and submit that into the gl_Position output of the vertex shader. The usual way to do this is by passing in a modelview and a projection matrix as uniforms.
The most bare bones GLSL vertex shader doing that would be
#version 330
uniform mat4 modelview;
uniform mat4 projection;
in vec4 pos;
void main(){
    gl_Position = projection * modelview * pos;
}
As for generating the projection matrix: all the popular computer graphics math libraries have you covered with functions for that.
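For example, with GLM (a sketch; the uniform names match the shader above, and the field-of-view and plane distances are placeholders):
// Needs <glm/gtc/matrix_transform.hpp> and <glm/gtc/type_ptr.hpp>.
glm::mat4 projection = glm::perspective(glm::radians(60.f),           // vertical fov
                                        (float)width / (float)height, // aspect ratio
                                        0.1f, 100.f);                 // near, far
glUniformMatrix4fv(glGetUniformLocation(program, "projection"),
                   1, GL_FALSE, glm::value_ptr(projection));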

GLSL vertex shader gl_Position value

I'm creating a game that uses an orthogonal view (2D). I'm trying to understand the value of gl_Position in the vertex shader.
From what I understand, the x and y coordinates translate to a screen position in the range of -1 to 1, but I'm quite confused about the role of z and w; I only know that the w value should be set to 1.0.
For the moment I just use gl_Position.xyw = vec3(Position, 1.0);, where Position is the 2D vertex position.
I use OpenGL 3.2.
Remember that OpenGL must also work for 3D, and it's easier to expose the 3D details than to create a new interface for just 2D.
The z component sets the depth of the vertex: values outside -1 to 1 (after the perspective divide) are clipped, and values inside that range are tested against the depth buffer to see if the fragment is behind some previously drawn triangle and should be hidden.
The w component is for the perspective divide and allows the GPU to interpolate values in a perspective-correct way. Otherwise the textures look weird.
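To illustrate, this is roughly what the hardware does with gl_Position after the vertex shader (a sketch using GLM types, not actual driver code):
#include <glm/glm.hpp>

// The GPU divides gl_Position.xyz by gl_Position.w to get normalized
// device coordinates, each in [-1, 1].
glm::vec3 toNdc(const glm::vec4& clip)
{
    return glm::vec3(clip) / clip.w;
}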

How to translate the projected object on screen in OpenGL

I've rendered a 3D object and its 2D projection in the image is correct. However, I now want to shift the 2D projected object by some pixels. How do I achieve that?
Note that simply translating the 3D object doesn't work, because under perspective projection the 2D projection could change. My goal is to shift the 2D object in the image without changing its shape and size.
If you're using the programmable pipeline, you can apply the translation after you applied the projection transformation.
The only thing you have to be careful about is that the transformed coordinates after applying the projection matrix have a w coordinate that will be used for the perspective division. To make the additional translation amount constant in screen space, you'll have to multiply it by w. The key part of the vertex shader would look like this:
in vec4 Position;
uniform mat4 ModelViewProjMat;
uniform vec2 TranslationOffset;
void main() {
    gl_Position = ModelViewProjMat * Position;
    gl_Position.xy += TranslationOffset * gl_Position.w;
}
After the perspective division by w, this will result in a fixed offset.
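If you want to specify the offset in pixels, it can be converted to NDC units on the CPU before uploading (a sketch; the names are placeholders, and TranslationOffset matches the shader above):
// The viewport spans 2 NDC units in each axis, so 1 pixel = 2/size units.
float offsetX = 2.f * xOffsetPixels / vpWidth;
float offsetY = 2.f * yOffsetPixels / vpHeight;
glUniform2f(glGetUniformLocation(program, "TranslationOffset"), offsetX, offsetY);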
Another possibility, which works with both the programmable and the fixed pipeline, is to shift the viewport. Say the window size is vpWidth by vpHeight and the offset you want to apply is (xOffset, yOffset); you can then set the viewport to:
glViewport(xOffset, yOffset, vpWidth, vpHeight);
Note that the last two parameters of glViewport are the viewport's width and height, not corner coordinates, so they stay unchanged for a pure shift.
One caveat here is that the geometry will still be clipped by the same view volume, but only be shifted by the viewport transform after clipping was applied. If the geometry would fit completely inside the original viewport, this will work fine. But if the geometry would have been clipped originally, it will still be clipped with the same planes, even though it might actually be inside the window after the shift is applied.
As an addition to Reto Koradi's answer: You don't need shaders, and you don't need to modify the viewport you use (which has the clipping issues mentioned in that answer). You can simply modify the projection matrix by pre-multiplying some translation (which in effect will be applied last, after the projective transformation):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glTranslatef(x, y, z); // <- this one was added
glFrustum(...) or gluPerspective(...) or whatever you use
glFrustum and gluPerspective will multiply the current matrix with the projective transform matrix they build, which is why one typically loads the identity first. However, it doesn't necessarily have to be the identity, and this case is one of the rare cases where one should load something else.
Since you want to shift in pixels, but that transformation is applied in clip space, you need some unit conversions. Since clip space is just the homogeneous representation of normalized device space, where the frustum is [-1,1] in all 3 dimensions (so the viewport is 2x2 units big in that space), you can use the following:
glTranslatef(x * 2.0f/viewport_width, y * 2.0f/viewport_height, 0.0f);
to shift the output by (x,y) pixels.
Note that while I wrote this for fixed-function GL, the math will of course work with shaders as well, and you can simply modify the projection matrix used by the shader in the same way.
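For example, with GLM that pre-multiplication could look like this (a sketch; the perspective parameters are placeholders):
// Pre-multiply the projection by the pixel shift, then upload the result
// as the shader's projection matrix.
glm::mat4 shift = glm::translate(glm::mat4(1.f),
    glm::vec3(x * 2.f / viewport_width, y * 2.f / viewport_height, 0.f));
glm::mat4 projection = shift * glm::perspective(glm::radians(45.f),
    (float)viewport_width / (float)viewport_height, 0.1f, 100.f);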

How to transform back-facing vertices in GLSL when creating a shadow volume

I'm writing a game using OpenGL and I am trying to implement shadow volumes.
I want to construct the shadow volume of a model on the GPU via a vertex shader. To that end, I represent the model with a VBO where:
Vertices are duplicated such that each triangle has its own unique three vertices
Each vertex has the normal of its triangle
For reasons I'm not going to get into, I was actually doing the above two points anyway, so I'm not too worried about the vertex duplication
Degenerate triangles are added to form quads inside the edges between each pair of "regular" triangles
Using this model format, inside the vertex shader I am able to find vertices that are part of triangles that face away from the light and move them back to form the shadow volume.
What I have left to figure out is what transformation exactly I should apply to the back-facing vertices.
I am able to detect when a vertex is facing away from the light, but I am unsure what transformation I should apply to it. This is what my vertex shader looks like so far:
uniform vec3 lightDir; // Parallel light.
// On the CPU this is represented in world
// space. After setting the camera with
// gluLookAt, the light vector is multiplied by
// the inverse of the modelview matrix to get
// it into eye space (I think that's what I'm
// working in :P ) before getting passed to
// this shader.
void main()
{
    vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 realLightDir = normalize(lightDir);
    float dotprod = dot(eyeNormal, realLightDir);
    if (dotprod <= 0.0)
    {
        // Facing away from the light.
        // Need to translate the vertex along the light vector to
        // stretch the model into a shadow volume.
        //---------------------------------//
        // This is where I'm getting stuck //
        //---------------------------------//
        // All I know is that I'll probably turn realLightDir into a
        // vec4
        gl_Position = ???;
    }
    else
    {
        gl_Position = ftransform();
    }
}
I've tried simply setting gl_Position to ftransform() - (vec4(realLightDir, 1.0) * someConstant), but this caused some kind of depth-testing bugs (some faces seemed to be visible behind others when I rendered the volume with colour), and someConstant didn't seem to affect how far the back faces were extended.
Update - Jan. 22
Just wanted to clear up questions about what space I'm probably in. I must say that keeping track of what space I'm in is the greatest source of my shader headaches.
When rendering the scene, I first set up the camera using gluLookAt. The camera may be fixed or it may move around; it should not matter. I then use translation functions like glTranslated to position my model(s).
In the program (i.e. on the CPU) I represent the light vector in world space (three floats). I've found during development that to get this light vector in the right space of my shader I had to multiply it by the inverse of the modelview matrix after setting the camera and before positioning the models. So, my program code is like this:
Position camera (gluLookAt)
Take light vector, which is in world space, and multiply it by the inverse of the current modelview matrix and pass it to the shader
Transformations to position models
Drawing of models
Does this make anything clearer?
The ftransform result is in clip space, so that is not the space you want to apply realLightDir in. I'm not sure which space your light is in (your comment confuses me), but what is certain is that you want to add vectors that are in the same space.
On the CPU this is represented in world
space. After setting the camera with
gluLookAt, the light vector is multiplied by
the inverse of the modelview matrix to get
it into eye space (I think that's what I'm
working in :P ) before getting passed to
this shader.
Multiplying a vector by the inverse of the modelview matrix brings the vector from view space to model space. So you're saying that your light vector, which is in world space, gets a view-to-model transform applied to it. That makes little sense to me.
We have 4 spaces:
model space: the space your gl_Vertex is defined in.
world space: a space that GL does not care about in general, which represents an arbitrary space to locate the models in. It's usually what the 3D engine works in (it maps to our general understanding of world coordinates).
view space: a space that corresponds to the reference frame of the viewer. (0,0,0) is where the viewer is, looking down Z. Obtained by multiplying gl_Vertex by the modelview matrix.
clip space: the magic space that the projection matrix brings us into. The result of ftransform is in this space (and so is gl_ModelViewProjectionMatrix * gl_Vertex).
Can you clarify exactly which space your light direction is in ?
What you need to do, however, is make the light vector addition in either model, world or view space: bring all the parts of your operation into the same space. E.g. for model space, just compute the light direction in model space on the CPU, and do:
vec4 vertexTemp = gl_Vertex + vec4(lightDirInModelSpace, 0.0) * someConst; // w = 0: a direction, not translated
then you can bring that new vertex position into clip space with
gl_Position = gl_ModelViewProjectionMatrix * vertexTemp;
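The CPU side of that model-space approach could look like this with GLM (a sketch with assumed names; model is the matrix that places this object in the world):
// Bring the world-space light direction into model space with the
// inverse of this object's model matrix.
glm::vec3 lightDirWorld = glm::normalize(glm::vec3(1.f, -1.f, 0.f));
glm::vec3 lightDirInModelSpace =
    glm::vec3(glm::inverse(model) * glm::vec4(lightDirWorld, 0.f)); // w = 0: direction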
Last bit: don't try to apply vector additions in clip space. It won't generally do what you think it should, as at that point you are necessarily dealing with homogeneous coordinates with non-uniform w.