I am trying to rotate a texture image using glm, but the output looks stretched or not properly rotated about the z-axis. What might be a possible solution for this?
float xScale = 1.0f, yScale = 1.0f;
float imgAspectRatio = imageWidth / (float) imageHeight;
float viewAspectRatio = viewWidth / (float) viewHeight;
if (imgAspectRatio > viewAspectRatio) {
    yScale = viewAspectRatio / imgAspectRatio;
} else {
    xScale = imgAspectRatio / viewAspectRatio;
}
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(0, 0, 0));
model = glm::rotate(model, glm::radians(angle),
glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScale, yScale, 1.0f));
Some suggested multiplying by a glm::ortho projection, but that gives a square shape:
glm::mat4 projection(1.0f);
projection = glm::ortho(
    -imgAspectRatio, imgAspectRatio,
    -1.0f, 1.0f,
    -1.0f, 1.0f
);
The code that you currently have would actually work if you had a square window.
By combining the image aspect ratio scaling and the view aspect ratio scaling into a single step, the order of operations you currently have is essentially as follows:
Scale image plane by image aspect ratio.
Scale all xy coordinates in your scene by the view aspect ratio.
Rotate image plane.
Translate image plane.
However, you really want to scale by the view aspect ratio at the very end. An easy exercise to help visualize why: imagine you have a perfectly square window in which you render a triangle with xy coordinates (-1, -1), (1, -1), (0, 1). Now suppose your window doubles in height, but you want the shape to stay the same: obviously, you just multiply each y-coordinate by 1/2. Now suppose you want to rotate the triangle by 90 degrees ccw, but again still keep the shape the same. Do you rotate first and then scale by the view aspect ratio? Or vice versa? By running through both options, it becomes clear that you have to scale by the view aspect ratio very last.
In other words, you need the following order:
Scale image plane by image aspect ratio.
Rotate image plane.
Translate image plane.
Scale all xy coordinates in your scene by the view aspect ratio.
Which is achieved by code that looks something like this:
float xScaleImg = 1.0f;
float yScaleImg = xScaleImg / imgAspectRatio;
float xScaleView = 1.0f;
float yScaleView = viewAspectRatio;
glm::mat4 model(1.0f);
model = glm::scale(model, glm::vec3(xScaleView, yScaleView, 1.0f));
model = glm::translate(model, glm::vec3(0, 0, 0));
model = glm::rotate(model, glm::radians(angle), glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScaleImg, yScaleImg, 1.0f));
The first four lines assume, as is the case in your example, that both the view and image widths are greater than the respective heights. You'll probably want to add some logic to change this around in the event that the reverse is true.
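For example, a minimal sketch of that extra logic might look like this (assuming imgAspectRatio and viewAspectRatio are the width/height ratios computed as in the question):

float xScaleImg = 1.0f, yScaleImg = 1.0f;
if (imgAspectRatio >= 1.0f) {
    yScaleImg = 1.0f / imgAspectRatio;   // wide image: shrink the plane's y extent
} else {
    xScaleImg = imgAspectRatio;          // tall image: shrink the plane's x extent
}

float xScaleView = 1.0f, yScaleView = 1.0f;
if (viewAspectRatio >= 1.0f) {
    yScaleView = viewAspectRatio;        // wide window: compensate by stretching y
} else {
    xScaleView = 1.0f / viewAspectRatio; // tall window: compensate by stretching x
}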
Update:
Depending on the effect you want to achieve for translation, you may want to move the glm::translate(...) command up one line. The order I gave in my original answer keeps the units of translation equal in pixels along both axes. E.g. if you pass in glm::vec3(1.0f, 1.0f, 0.0f) and the width of the window is, say, 1280 pixels, then the image will be translated 640 pixels to the right and 640 pixels upwards.
However, you may want to keep the OpenGL [-1, 1] range on both axes. That is, when you pass in glm::vec3(1.0f, 1.0f, 0.0f) for the translation, you may want the image to be translated right by half of the window's width and up by half of the window's height. In that case, you need to make the translation the last operation performed on the image, and this is done by making the glm::translate(...) the first line of code. This is what Nile Qor wanted. In this case, the last lines of the code become something like:
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(1.0f, 1.0f, 0));
model = glm::scale(model, glm::vec3(xScaleView, yScaleView, 1.0f));
model = glm::rotate(model, glm::radians(angle), glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScaleImg, yScaleImg, 1.0f));
For more context, you can see the discussion in the comments section of this answer.
I have what I believed to be a basic need: from the 2D position of the mouse on the screen, I need to get the closest 3D point in the 3D world. This looks like the common ray-tracing problem (even if it's not exactly mine).
I googled and read a lot: the topic is messy, and many things unfortunately get intricate quickly. My actual problem involves lots of 3D points that I do not know (meshes or point clouds from the internet), so it's impossible to tell what result to expect! I therefore decided to create simple shapes (triangle, quadrangle, cube) with points that I know, and to see if I can "recover" the 3D point positions from the mouse cursor as I move it over the screen.
Note: all coordinates of all points of all shapes are known values like 0.f or 0.5f in the local frame. For example, with the triangle:
float vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
};
What I do
I have a 3D OpenGL renderer to which I added a GUI to control the rendered scene.
Transformations: tx, ty, tz, rx, ry, rz are controls that change the model matrix. In code:
// create transformations: model represents local to world transformation
model = glm::mat4(1.0f); // initialize matrix to identity matrix first
model = glm::translate(model, glm::vec3(tx, ty, tz));
model = glm::rotate(model, glm::radians(rx), glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, glm::radians(ry), glm::vec3(0.0f, 1.0f, 0.0f));
model = glm::rotate(model, glm::radians(rz), glm::vec3(0.0f, 0.0f, 1.0f));
ourShader.setMat4("model", model);
model changes only the position of the shape in the world and has no connection with the position of the camera (that's what I understand from tutorials).
Camera: from here, I ended up with a camera class that holds the view and proj matrices. In code:
// get view and projection from camera
view = cam.getViewMatrix();
ourShader.setMat4("view", view);
proj = cam.getProjMatrix((float)SCR_WIDTH, (float)SCR_HEIGHT, near, 100.f);
ourShader.setMat4("proj", proj);
The camera is a fly-like camera that can be moved with the mouse or the keyboard arrows; it does not act on model, only on view and proj (that's what I understand from tutorials).
The shader then uses model, view and proj this way:
layout (location = 0) in vec3 aPos; // vertex position attribute

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

void main()
{
    // note that we read the multiplication from right to left
    gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
Screen to world: as glm::unProject didn't always return the results I expected, I added a control to bypass it (back-projecting by hand). In code, first I get the mouse cursor position frame3DPos following this:
// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouseCursorCallback(GLFWwindow* window, double xposIn, double yposIn)
{
    // screen to world transformation
    xposScreen = xposIn;
    yposScreen = yposIn;

    int windowWidth = 0, windowHeight = 0; // size in screen coordinates.
    glfwGetWindowSize(window, &windowWidth, &windowHeight);
    int frameWidth = 0, frameHeight = 0; // size in pixels.
    glfwGetFramebufferSize(window, &frameWidth, &frameHeight);
    glm::vec2 frameWinRatio = glm::vec2(frameWidth, frameHeight) /
                              glm::vec2(windowWidth, windowHeight);

    glm::vec2 screen2DPos = glm::vec2(xposScreen, yposScreen);
    glm::vec2 frame2DPos = screen2DPos * frameWinRatio; // window / frame sizes may be different.
    frame2DPos = frame2DPos + glm::vec2(0.5f, 0.5f); // shift to GL's center convention.

    glm::vec3 frame3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
    frame3DPos.x = frame2DPos.x;
    frame3DPos.y = frameHeight - 1.0f - frame2DPos.y; // GL's window origin is at the bottom left
    frame3DPos.z = 0.f;
    glReadPixels((GLint) frame3DPos.x, (GLint) frame3DPos.y, // CAUTION: cast to GLint.
                 1, 1, GL_DEPTH_COMPONENT,
                 GL_FLOAT, &zbufScreen); // CAUTION: GL_DOUBLE is NOT supported.
    frame3DPos.z = zbufScreen; // z-buffer.
And then I can call glm::unProject or not (back-projecting by hand), according to the controls in the GUI:
glm::vec3 world3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
if (screen2WorldUsingGLM) {
    glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
    world3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
} else {
    glm::mat4 trans = proj * view * model;
    glm::vec4 frame4DPos(frame3DPos, 1.f);
    frame4DPos = glm::inverse(trans) * frame4DPos;
    world3DPos.x = frame4DPos.x / frame4DPos.w;
    world3DPos.y = frame4DPos.y / frame4DPos.w;
    world3DPos.z = frame4DPos.z / frame4DPos.w;
}
Question: the glm::unProject doc says Map the specified window coordinates (win.x, win.y, win.z) into object coordinates, but I am not sure I understand what object coordinates are. Do object coordinates refer to the local, world, view or clip space described here?
Z-buffering is always enabled, whether the shape is 2D (triangle, quadrangle) or 3D (cube). In code:
glEnable(GL_DEPTH_TEST); // Enable z-buffer.
while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // also clear the z-buffer
What I set up
The camera is positioned at (0., 0., 0.) and looks "ahead" (front = -z, as the z-axis points from the screen toward me). The shape is positioned (using tx, ty, tz, rx, ry, rz) "in front of the camera" with tz = -5 (5 units along the front vector of the camera).
What I get
Triangle in initial setting
I have correct xpos and ypos in the world frame, but an incorrect zpos = 0. (z-buffering is enabled). I expected zpos = -5 (as tz = -5).
Question: why is zpos incorrect?
If I do not use glm::unProject, I get outer-space results.
Question: why does back-projecting by hand not return results consistent with glm::unProject? Is this logical? Are they different operations? (I believed they should be equivalent, but they are obviously not.)
Triangle moved with translation
After a translation of about tx = 0.5, I still get the same coordinates (local frame), where I expected the previous coordinates translated along the x-axis. Not using glm::unProject returns outer-space results here too...
Question: why is the translation (applied by model, not by view nor proj) ignored?
Cube in initial setting
I get correct xpos, ypos and zpos?!... So why does this not work the same way with the "2D" triangle (which is a "3D" one to me, so they should behave the same)?
Cube moved with translation
A translation along ty this time seems to have no effect (I still get the same coordinates, in the local frame).
Question: as with the triangle, why is the translation ignored?
What I'd like to get
The main question is: why is the model transformation ignored? If this is to be expected, I'd like to understand why.
If there's a way to recover the "true" position of the shape in the world (including model transformation) from the position of the mouse cursor, I'd like to understand how.
As I was new to OpenGL, I didn't get that object coordinates in the glm::unProject doc is another way of referring to local space. Solution: pass view * model to glm::unProject and apply model again, or pass only view to glm::unProject, as explained here: Screen Coordinates to World Coordinates.
This fixes all weird behaviors I observed.
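A minimal sketch of the two equivalent fixes, reusing the view, model, proj, viewport and frame3DPos variables from the question:

// Option 1: pass only view, so the result is already in world space.
glm::vec3 world3DPos = glm::unProject(frame3DPos, view, proj, viewport);

// Option 2: pass view * model, which yields local (object) coordinates,
// then apply model once more to get back to world space.
glm::vec3 local3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
glm::vec3 world3DPosAgain = glm::vec3(model * glm::vec4(local3DPos, 1.0f));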
I am trying to learn OpenGL by coding some stuff, but I am still not able to understand the concept of rotation.
Here is my code:
glm::mat4 projection1 = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
glm::mat4 view1 = camera.GetViewMatrix();
ourShader.setMat4("projection", projection1);
ourShader.setMat4("view", view1);
ourShader.setInt("pass1", 1);

glm::mat4 model1 = glm::mat4(1.0f);
vangle += 0.1;
float cvangle = (vangle - 90) * PI / 180;
model1 = glm::translate(model1, glm::vec3(cos(cvangle) * 50, 0, sin(cvangle) * 50));
model1 = glm::scale(model1, glm::vec3(1, 1, 1));
model1 = glm::rotate(model1, 3.0f, glm::vec3(1, 0, 0));
model1 = glm::rotate(model1, 2.0f, glm::vec3(0, 1, 0));
ourShader.setMat4("model", model1);
ourModel.Draw(ourShader);
The helicopter should rotate around the camera, but my problem is that the rotation has a different effect in each angle, i.e. at angle 0, it looks like this:
while at angle 90, it looks like this:
My goal is to rotate the helicopter around the camera showing always the same side.
Any help is appreciated.
If you want to rotate an object in place, then you have to do the rotation before the translation:
model = translate * rotate;
If you want to rotate around a point, then you have to translate the object (by the rotation radius) and then rotate the translated object:
model = rotate * translate;
Note that operations like rotate, scale and translate define a new matrix and multiply the input matrix by that new matrix.
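To make that concrete, here is a small sketch (the angle and radius are just illustrative values):

// glm::rotate / glm::translate post-multiply, so the call made last is
// applied to the object first.

// Rotate in place: model = translate * rotate.
glm::mat4 inPlace(1.0f);
inPlace = glm::translate(inPlace, glm::vec3(50.0f, 0.0f, 0.0f));
inPlace = glm::rotate(inPlace, glm::radians(45.0f), glm::vec3(0, 0, 1));

// Orbit around the origin: model = rotate * translate.
glm::mat4 orbit(1.0f);
orbit = glm::rotate(orbit, glm::radians(45.0f), glm::vec3(0, 0, 1));
orbit = glm::translate(orbit, glm::vec3(50.0f, 0.0f, 0.0f));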
So in your case, the translation has to be done after a rotation (rotate) around the z-axis:
vangle+=0.1;
glm::mat4 model1 = glm::mat4(1.0f);
model1 = glm::rotate(model1, glm::radians(vangle), glm::vec3(0, 0, 1));
model1 = glm::translate(model1, glm::vec3(50.0f, 0.0f, 0.0f));
model1 = glm::scale(model1, glm::vec3(1, 1, 1));
How do I determine what transforms are needed to make a square fill an entire window in modern OpenGL? Say, for example, I have an 800 x 600 window and coordinates with the vertices of two triangles extending from -1 to 1. Without any transformation, these coordinates would fill an 800 x 600 window, because OpenGL's coordinates extend from -1 to 1. What if I want to use a standard MVP transformation, though? How do I determine what needs to be done in order to fill an entire window? Consider this code:
glm::mat4 projectionMatrix = glm::perspective(60.0f, 4.f/3.f, 0.1f, 100.0f); // gluPerspective equivalent for those who may not know about glm
glm::mat4 viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
glm::mat4 modelMatrix = glm::mat4(1.0f);
with the same coordinates. I would now get a square somewhere in the middle of the window. Assuming I do not change the projection matrix, what changes would need to be made to the View and Model matrices? I understand the matrix math, but not how it relates to window coordinates themselves.
Could anyone help me understand this?
For 2D elements (like a HUD) I generally set an orthographic matrix with left = 0, right = width, bottom = 0, top = height, where width and height are the size of the window, and with znear = -1 and zfar = 1.
Then your rectangle vertices would be at 0, 0, width and height.
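A minimal sketch of that setup (assuming width and height hold the window size in pixels):

// Orthographic projection mapping window pixels directly to clip space.
glm::mat4 projection = glm::ortho(0.0f, (float)width,   // left, right
                                  0.0f, (float)height,  // bottom, top
                                  -1.0f, 1.0f);         // znear, zfar

// Fullscreen rectangle as two triangles, in pixel coordinates.
float quad[] = {
    0.0f,         0.0f,
    (float)width, 0.0f,
    (float)width, (float)height,

    0.0f,         0.0f,
    (float)width, (float)height,
    0.0f,         (float)height
};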
Is that what you want? I'm not sure why you want to use a perspective matrix for a rectangle that fills the screen.
I'm despairing over the task of zooming in on the current mouse position in OpenGL. I've tried a lot of different things and read other posts on this, but I couldn't adapt the possible solutions to my specific problem. As far as I understood it, you have to get the current window coordinates of the mouse cursor, then unproject them to get world coordinates, and finally translate to those world coordinates.
To find the current mouse positions, I use the following code in my GLUT mouse callback function every time the right mouse button is clicked.
if (button == 2)
{
    mouse_current_x = x;
    mouse_current_y = y;
    ...
Next up, I unproject the current mouse position in my display function before setting up the ModelView and Projection matrices, which also seems to work perfectly fine:
// Unproject Window Coordinates
float mouse_current_z;
glReadPixels(mouse_current_x, mouse_current_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &mouse_current_z);
glm::vec3 windowCoordinates = glm::vec3(mouse_current_x, mouse_current_y, mouse_current_z);
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, (float)width, (float)height);
glm::vec3 worldCoordinates = glm::unProject(windowCoordinates, modelViewMatrix, projectionMatrix, viewport);
printf("(%f, %f, %f)\n", worldCoordinates.x, worldCoordinates.y, worldCoordinates.z);
Now the translation is where the trouble starts. Currently I'm drawing a cube with dimensions (dimensionX, dimensionY, dimensionZ) and translate to the center of that cube, so my zooming happens to the center point as well. I'm achieving zooming by translating in z-direction (dolly):
// Set ModelViewMatrix
modelViewMatrix = glm::mat4(1.0); // Start with the identity as the transformation matrix
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(0.0, 0.0, -translate_z)); // Zoom in or out by translating in z-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_x, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the whole scene in x-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_y, glm::vec3(0.0f, 1.0f, 0.0f)); // Rotate the whole scene in y-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, -90.0f, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the camera by 90 degrees in negative x-direction to get a frontal look on the scene
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(-dimensionX/2.0f, -dimensionY/2.0f, -dimensionZ/2.0f)); // Translate the origin to be the center of the cube
glBindBuffer(GL_UNIFORM_BUFFER, globalMatricesUBO);
glBufferSubData(GL_UNIFORM_BUFFER, sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(modelViewMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
I tried to replace the translation to the center of the cube with a translation to the worldCoordinates vector, but that didn't work. I also tried scaling the vector by the width or height.
Am I missing out on some essential step here?
Maybe this won't work in your case, but to me this seems like the best way to handle it: use gluLookAt() to look at the xyz position of the mouse click that you have already found, then change gluPerspective() to a smaller angle of view to achieve the actual zoom.
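A rough sketch of that idea using the GLM equivalents (worldCoordinates is the unprojected point from the question; the eye position and field of view are illustrative):

// Aim the camera at the unprojected click position...
glm::vec3 eye(0.0f, 0.0f, 10.0f); // illustrative camera position
glm::mat4 viewMatrix = glm::lookAt(eye,
                                   worldCoordinates,             // look at the clicked point
                                   glm::vec3(0.0f, 1.0f, 0.0f)); // up vector

// ...and zoom by narrowing the field of view.
float fovDegrees = 30.0f; // smaller than the usual 45-60 degrees
glm::mat4 projectionMatrix = glm::perspective(glm::radians(fovDegrees),
                                              (float)width / (float)height,
                                              0.1f, 100.0f);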
I am working on rendering a terrain in OpenGL.
My code is the following:
void Render_Terrain(int k)
{
    GLfloat angle = (GLfloat) (k / 40 % 360);

    // PROJECTION
    glm::mat4 Projection = glm::perspective(45.0f, 1.0f, 0.1f, 100.0f);

    // VIEW
    glm::mat4 View = glm::mat4(1.);
    // ROTATION
    //View = glm::rotate(View, angle * -0.1f, glm::vec3(1.f, 0.f, 0.f));
    //View = glm::rotate(View, angle * 0.2f, glm::vec3(0.f, 1.f, 0.f));
    //View = glm::rotate(View, angle * 0.9f, glm::vec3(0.f, 0.f, 1.f));
    View = glm::translate(View, glm::vec3(0.f, 0.f, -4.0f)); // x, y, z position ?

    // MODEL
    glm::mat4 Model = glm::mat4(1.0);

    glm::mat4 MVP = Projection * View * Model;
    glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MVP_matrix"), 1, GL_FALSE, glm::value_ptr(MVP));

    // Transfer additional information to the vertex shader
    glm::mat4 MV = Model * View;
    glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MV_matrix"), 1, GL_FALSE, glm::value_ptr(MV));

    glClearColor(0.0, 0.0, 0.0, 1.0);
    glDrawArrays(GL_LINE_STRIP, terrain_start, terrain_end);
}
I can do a rotation around the X,Y,Z axis, scale my terrain but I can't find a way to move the camera. I am using OpenGL 3+ and I am kinda new to graphics.
The best way to move the camera would be through the use of gluLookAt(). It simulates camera movement, since the camera itself cannot be moved at all. The function takes 9 parameters. The first 3 are the XYZ coordinates of the eye, which is where the camera is located. The next 3 are the XYZ coordinates of the center, the point the camera is looking at from the eye; it is always the center of the screen. The last 3 are the XYZ coordinates of the up vector, which points vertically upwards from the eye. By manipulating these three XYZ triples, you can simulate any camera movement you want.
Check out this link.
Further details:
- If you want, for example, to rotate around an object, you rotate your eye around the up vector.
- If you want to move forward or backwards, you add to or subtract from both the eye and the center points.
- If you want to tilt the camera left or right, you rotate your up vector around your look vector, where the look vector is center - eye.
gluLookAt operates on the deprecated fixed-function pipeline, so you should use glm::lookAt instead.
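A small sketch of glm::lookAt in that role (the eye/center/up values are illustrative), including a forward move along the look vector as described above:

glm::vec3 eye(0.0f, 2.0f, 10.0f);   // camera position
glm::vec3 center(0.0f, 0.0f, 0.0f); // point the camera looks at
glm::vec3 up(0.0f, 1.0f, 0.0f);     // world up

// Move forward: shift both eye and center along the look vector.
glm::vec3 look = glm::normalize(center - eye);
float speed = 0.1f;
eye    += speed * look;
center += speed * look;

glm::mat4 View = glm::lookAt(eye, center, up);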
You are currently using a constant vector for translation. In the commented out code (which I assume you were using to test rotation), you use angle to adjust the rotation. You should have a similar variable for translation. Then, you can change the glm::translate call to:
View = glm::translate(View, glm::vec3(x_transform, y_transform, z_transform)); // x, y, z position ?
and get translation.
You should probably pass more than one parameter into Render_Terrain, as translation and rotation need at least six parameters.
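Something along these lines (the parameter names are only a suggestion):

void Render_Terrain(int k, float x_transform, float y_transform, float z_transform,
                    float x_rot, float y_rot, float z_rot)
{
    // ... as before, but with the view transform driven by the parameters:
    glm::mat4 View = glm::mat4(1.);
    View = glm::rotate(View, glm::radians(x_rot), glm::vec3(1.f, 0.f, 0.f));
    View = glm::rotate(View, glm::radians(y_rot), glm::vec3(0.f, 1.f, 0.f));
    View = glm::rotate(View, glm::radians(z_rot), glm::vec3(0.f, 0.f, 1.f));
    View = glm::translate(View, glm::vec3(x_transform, y_transform, z_transform));
    // ...
}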
In OpenGL the camera is always at (0, 0, 0). You need to set the matrix mode to GL_MODELVIEW, and then modify or set the model/view matrix using things like glTranslate, glRotate, glLoadMatrix, etc. in order to make it appear that the camera has moved. If you're using GLU, you can use gluLookAt to point the camera in a particular direction.