We are using a semicircle as a PNG image and rotating it with the OpenGL rotate API so that it fits the circle border, but we are not getting the proper result. Can anyone explain what the reason is?
glm::mat4 transform;
transform = glm::rotate(transform, glm::radians(-60.0f), glm::vec3(m_fAltXPos, m_fAltYPos, 0.0f));
GLuint transformLoc = glGetUniformLocation(m_uiProgram, "transform");
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, glm::value_ptr(transform));
if (m_ucType == VTD_INTRUDER_RA)
glBindTexture(GL_TEXTURE_2D, m_uiTextures[VTD_INT_TEX_ORA]);
else
glBindTexture(GL_TEXTURE_2D, m_uiTextures[VTD_INT_TEX_OTA]);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
where m_uiTextures is the array holding the loaded texture IDs.
My guess is that you are drawing a quad on a 2D surface with no "to world coordinates" transformation, but you are still rotating in 3D, and that is the problem.
glm::rotate expects you to provide a matrix to apply the rotation to, a rotation angle, and the axis to rotate around.
If you want to rotate around the x axis:
transform = glm::rotate(transform, angle, glm::vec3(1.0f, 0.0f, 0.0f));
or maybe you want to rotate around both the y and z axes:
transform = glm::rotate(transform, angle, glm::vec3(0.0f, 1.0f, 1.0f));
Change
transform = glm::rotate(transform, glm::radians(-60.0f), glm::vec3(m_fAltXPos, m_fAltYPos, 0.0f));
to
transform = glm::rotate(transform, glm::radians(-60.0f), glm::vec3(0.0f, 0.0f, 1.0f));
simply because you are working in 2D space and won't benefit from rotations around the x or y axis.
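One more thing worth checking: depending on your GLM version, glm::mat4 transform; may be left uninitialized (since GLM 0.9.9 the default constructor no longer produces an identity matrix unless GLM_FORCE_CTOR_INIT is defined). A minimal sketch of the corrected setup, reusing your variable names:
glm::mat4 transform(1.0f); // explicit identity; a default-constructed mat4 may be garbage in newer GLM
transform = glm::rotate(transform, glm::radians(-60.0f), glm::vec3(0.0f, 0.0f, 1.0f)); // z axis for 2D
GLuint transformLoc = glGetUniformLocation(m_uiProgram, "transform");
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, glm::value_ptr(transform));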
I am trying to rotate a texture image using glm, but the output looks stretched and not properly rotated around the z axis. What might be a possible solution?
float xScale = 1.0f, yScale = 1.0f; // start from uniform scale, then letterbox one axis
float imgAspectRatio = imageWidth / (float) imageHeight;
float viewAspectRatio = viewWidth / (float) viewHeight;
if (imgAspectRatio > viewAspectRatio) {
    yScale = viewAspectRatio / imgAspectRatio;
} else {
    xScale = imgAspectRatio / viewAspectRatio;
}
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(0, 0, 0));
model = glm::rotate(model, glm::radians(angle),
glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScale, yScale, 1.0f));
Some suggested multiplying with a glm::ortho projection, but it gives a square shape:
glm::mat4 projection(1.0f);
projection = glm::ortho(
-imgAspectRatio,
imgAspectRatio,
-1.0f,
1.0f,
-1.0f,
1.0f
);
The code that you currently have would actually work if you had a square window.
By combining the image aspect ratio scaling and the view aspect ratio scaling into a single step, the order of operations you currently have is essentially as follows:
Scale image plane by image aspect ratio.
Scale all xy coordinates in your scene by the view aspect ratio.
Rotate image plane.
Translate image plane.
However, you really want to scale by the view aspect ratio at the very end. An easy exercise to help visualize why: imagine you have a perfectly square window in which you render a triangle with xy coordinates (-1, -1), (1, -1), (0, 1). Now suppose your window doubles in height, but you want the shape to stay the same: obviously, you just multiply each y-coordinate by 1/2. Now suppose you want to rotate the triangle by 90 degrees ccw, but again still keep the shape the same. Do you rotate first and then scale by the view aspect ratio? Or vice versa? By running through both options, it becomes clear that you have to scale by the view aspect ratio very last.
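To make the exercise concrete, here is a small GLM sketch (my illustration, assuming glm/gtc/matrix_transform.hpp is included) that pushes the triangle's top vertex (0, 1) through both orderings for a window that has doubled in height:
glm::mat4 rot = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f));
glm::mat4 viewScale = glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 0.5f, 1.0f)); // window height doubled
glm::vec4 top(0.0f, 1.0f, 0.0f, 1.0f);
glm::vec4 good = viewScale * rot * top; // view scale last: (-1, 0), same on-screen distance from center
glm::vec4 bad  = rot * viewScale * top; // view scale first: (-0.5, 0), the rotated triangle is squashed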
In other words, you need the following order:
Scale image plane by image aspect ratio.
Rotate image plane.
Translate image plane.
Scale all xy coordinates in your scene by the view aspect ratio.
Which is achieved by code that looks something like this:
float xScaleImg = 1.0f;
float yScaleImg = xScaleImg / imgAspectRatio;
float xScaleView = 1.0f;
float yScaleView = viewAspectRatio;
glm::mat4 model(1.0f);
model = glm::scale(model, glm::vec3(xScaleView, yScaleView, 1.0f));
model = glm::translate(model, glm::vec3(0, 0, 0));
model = glm::rotate(model, glm::radians(angle), glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScaleImg, yScaleImg, 1.0f));
The first four lines assume, as is the case in your example, that both the view and image widths are greater than the respective heights. You'll probably want to add some logic to change this around in the event that the reverse is true.
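For example, one way to generalize it (my sketch, not tested against the original code):
float xScaleImg = 1.0f, yScaleImg = 1.0f;
if (imgAspectRatio >= 1.0f)
    yScaleImg = 1.0f / imgAspectRatio;   // image wider than tall: shrink y
else
    xScaleImg = imgAspectRatio;          // image taller than wide: shrink x

float xScaleView = 1.0f, yScaleView = 1.0f;
if (viewAspectRatio >= 1.0f)
    yScaleView = viewAspectRatio;        // window wider than tall: compensate on y
else
    xScaleView = 1.0f / viewAspectRatio; // window taller than wide: compensate on x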
Update:
Depending on the effect you want to achieve for translation, you may want to move the glm::translate(...) command up one line. The order I gave in my original answer keeps the units of translation equal in pixels on both axes. E.g. if you pass in glm::vec3(1.0f, 1.0f, 0.0f), and the width of the window is, say, 1280 pixels, then the image will be translated 640 pixels to the right and 640 pixels upwards.
However, you may want to keep the OpenGL [-1, 1] range on both axes. That is, when you pass in glm::vec3(1.0f, 1.0f, 0.0f) for the translation, you may want the image to be translated right by half of the window's width and up by half of the window's height. In that case, you need to make the translation the last operation performed on the image, and this is done by making the glm::translate(...) the first line of code. This is what Nile Qor wanted. In this case, the last lines of the code become something like:
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(1.0f, 1.0f, 0));
model = glm::scale(model, glm::vec3(xScaleView, yScaleView, 1.0f));
model = glm::rotate(model, glm::radians(angle), glm::vec3(0, 0, 1));
model = glm::scale(model, glm::vec3(xScaleImg, yScaleImg, 1.0f));
For more context, you can see the discussion in the comments section of this answer.
For practice I am setting up a 2d/orthographic rendering pipeline in openGL to be used for a simple game, but I am having issues related to the coordinate system.
In short, rotations distort 2d shapes, and I cannot seem to figure why. I am also not entirely sure that my coordinate system is sound.
First I looked for previous answers; the most relevant one ("2D opengl rotation causes sprite distortion") indicates that the problem was an incorrect ordering of transformations, but for now I am using just a view matrix and a projection matrix, multiplied in the correct order in the vertex shader:
gl_Position = projection * view * model * vec4(a_position, 1.0); // (The model is just the identity matrix.)
To summarize my setup so far:
- I am successfully uploading a quad that should stretch across the whole screen:
GLfloat vertices[] = {
-wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top left
-wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom left
wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom right
wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top right
};
GLuint indices[] = {
0, 1, 2, // first Triangle
2, 3, 0, // second Triangle
};
wf and hf are 1, and I am trying to use a -1 to 1 coordinate system so I don't need to scale by the resolution in shaders (though I am not sure that this is the correct approach).
My viewport and orthographic matrix:
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
...
glm::mat4 mat_ident(1.0f);
glm::mat4 mat_projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
... though this clearly does not factor in the screen width and height. I have seen others use width and height instead of 1s, but this seems to break the system or display nothing.
I rotate with a static method that modifies a struct containing a glm::quat (dividing time by 1000 to get seconds):
main_cam.rotate((GLfloat)curr_time / TIME_UNIT_TO_SECONDS, 0.0f, 0.0f, 1.0f);
// which does: glm::angleAxis(angle, glm::vec3(x, y, z) * orientation)
Lastly, I pass the matrix as a uniform:
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_ident));
...and multiply in the vertex shader
gl_Position = u_matrix * vec4(a_position, 1.0);
v_position = a_position.xyz;
The full-screen quad rotates about its center (0, 0, as I wanted), but its length and width distort, which means that I haven't set something up correctly.
My best guess is that I haven't created the right ortho matrix, but admittedly I have had trouble finding anything else on stack overflow or elsewhere that might help debug. Most answers suggest that the matrix multiplication order is wrong, but that is not the case here.
A secondary question: should I not set my coordinates to -1/1 in the context of a 2D game? I did so in order to make writing shaders easier. I am also concerned about character/object movement once I add model matrices.
What might be causing the issue? If I need to multiply the arguments to gl::ortho by width and height, then how do I transform coordinates so v_position (my "in"/"varying" interpolated version of the position attribute) works in -1 to 1 as it should in a shader? What are the implications of choosing a particular coordinates system when it comes to ease of placing entities? The game will use sprites and textures, so I was considering a pixel coordinate system, but that quickly became very challenging to reason about on the shader side. I would much rather have THIS working.
Thank you for your help.
EDIT: Is it possible that my varying/interpolated v_position should be set to the calculated gl_Position value instead of the attribute position?
Try accounting for the aspect ratio of the window you are displaying on in the first two parameters of glm::ortho:
GLfloat aspectRatio = SCREEN_WIDTH / (GLfloat) SCREEN_HEIGHT; // cast to avoid integer division
glm::mat4 mat_projection = glm::ortho(-aspectRatio, aspectRatio, -1.0f, 1.0f, -1.0f, 1.0f);
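Note that with this projection a quad whose corners sit at ±1 will appear as a centered square rather than filling the window, because the visible x range is now [-aspectRatio, aspectRatio]. If you still want it to span the full width, scale the model matrix (currently your identity matrix) back out by the same ratio, for example:
model = glm::scale(model, glm::vec3(aspectRatio, 1.0f, 1.0f));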
I'm writing a small 2D game engine (for educational purposes) in C++ and OpenGL 3.3. While writing the code I noticed that almost all sprites (if not all) use the same vertexBuffer values:
const float vertexBuffer[] =
{
    -1.0f, -1.0f, 0.0f, 1.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 0.0f, 1.0f,
     1.0f, -1.0f, 0.0f, 1.0f
};
That is two triangles (using VBO indexing) in model space that form a square; the indexBuffer looks like:
const unsigned short indexBuffer[] = { 0, 1, 2, 2, 0, 3 };
Why am I using the same model-space values for all my sprites? Because I use a different MVP matrix for each of them:
P (projection): The orthographic camera transform, usually with the same width and height as the glContext.
V (view): A lookAt transformation; it just sits on the z axis looking perpendicularly at the xy plane. This is also used to move the camera (follow the player, etc.).
M (model): this matrix is created using transformations belonging to each sprite:
glm::mat4 model = <translate> * <rotate> * <scale>
Where:
<translate> is the position of the sprite in screen-space
<rotate> the rotation of the sprite
<scale> The size of the sprite in pixels, divided by 2. Why? Each corner of the model-space square corresponds to a vertex, and the square they form is centered on the origin, so if our sprite is 250x250 pixels, we scale by 125px to each side in each axis, thus transforming our model-space square into a screen-space square.
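As a concrete illustration of that model matrix (a minimal sketch; position, rotationDegrees and sizePx are hypothetical per-sprite fields, with position and sizePx being glm::vec2):
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(sprite.position, 0.0f));                      // <translate>
model = glm::rotate(model, glm::radians(sprite.rotationDegrees), glm::vec3(0, 0, 1)); // <rotate>
model = glm::scale(model, glm::vec3(sprite.sizePx * 0.5f, 1.0f));                     // <scale>: half-size in px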
So, if I have 5 sprites I'll call glDrawElements 5 times, with different MVPs and textures each time, but the same vertexBuffer, indexBuffer and uvCoordinates.
Do you think this is an error-prone approach to keep using in the future? Or should I instead apply the <translate> and <scale> transformations directly to the vertices when creating them, and leave the model matrix with only the rotation?
I'm struggling with the task of zooming in on the current mouse position in OpenGL. I've tried a lot of different things and read other posts on this, but I couldn't adapt the possible solutions to my specific problem. As far as I understand it, you have to get the current window coordinates of the mouse cursor, then unproject them to get world coordinates, and finally translate to those world coordinates.
To find the current mouse positions, I use the following code in my GLUT mouse callback function every time the right mouse button is clicked.
if(button == 2)
{
mouse_current_x = x;
mouse_current_y = y;
...
Next up, I unproject the current mouse position in my display function before setting up the ModelView and Projection matrices, which also seems to work perfectly fine:
// Unproject Window Coordinates
float mouse_current_z;
glReadPixels(mouse_current_x, mouse_current_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &mouse_current_z);
glm::vec3 windowCoordinates = glm::vec3(mouse_current_x, mouse_current_y, mouse_current_z);
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, (float)width, (float)height);
glm::vec3 worldCoordinates = glm::unProject(windowCoordinates, modelViewMatrix, projectionMatrix, viewport);
printf("(%f, %f, %f)\n", worldCoordinates.x, worldCoordinates.y, worldCoordinates.z);
Now the translation is where the trouble starts. Currently I'm drawing a cube with dimensions (dimensionX, dimensionY, dimensionZ) and translating to the center of that cube, so my zooming happens around the center point as well. I achieve zooming by translating in the z-direction (a dolly):
// Set ModelViewMatrix
modelViewMatrix = glm::mat4(1.0); // Start with the identity as the transformation matrix
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(0.0, 0.0, -translate_z)); // Zoom in or out by translating in z-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_x, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the whole scene around the x-axis based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_y, glm::vec3(0.0f, 1.0f, 0.0f)); // Rotate the whole scene around the y-axis based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, -90.0f, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the camera by -90 degrees around the x-axis to get a frontal view of the scene
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(-dimensionX/2.0f, -dimensionY/2.0f, -dimensionZ/2.0f)); // Translate the origin to be the center of the cube
glBindBuffer(GL_UNIFORM_BUFFER, globalMatricesUBO);
glBufferSubData(GL_UNIFORM_BUFFER, sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(modelViewMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
I tried to replace the translation to the center of the cube with a translation to the worldCoordinates vector, but that didn't work. I also tried scaling the vector by the width or height.
Am I missing out on some essential step here?
Maybe this won't work in your case, but to me this seems like the best way to handle it: use gluLookAt() to look at the xyz position of the mouse click that you have already found, then change gluPerspective() to a smaller angle of view to achieve the actual zoom.
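Since the question already uses glm matrices rather than the fixed-function pipeline, the same idea can be sketched with glm::lookAt and a narrowed field of view (zoomFactor is a placeholder, and the eye offset along z is just one possible choice):
glm::vec3 center = worldCoordinates;                         // the unprojected click position
glm::vec3 eye = center + glm::vec3(0.0f, 0.0f, translate_z); // back the eye off along z
glm::mat4 viewMatrix = glm::lookAt(eye, center, glm::vec3(0.0f, 1.0f, 0.0f));
float fov = 45.0f / zoomFactor;                              // zoomFactor > 1 zooms in
projectionMatrix = glm::perspective(glm::radians(fov), width / (float)height, 0.1f, 100.0f); // recent GLM takes radians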
I am working on rendering a terrain in OpenGL.
My code is the following:
void Render_Terrain(int k)
{
GLfloat angle = (GLfloat) (k/40 % 360);
//PROJECTION
glm::mat4 Projection = glm::perspective(45.0f, 1.0f, 0.1f, 100.0f);
//VIEW
glm::mat4 View = glm::mat4(1.);
//ROTATION
//View = glm::rotate(View, angle * -0.1f, glm::vec3(1.f, 0.f, 0.f));
//View = glm::rotate(View, angle * 0.2f, glm::vec3(0.f, 1.f, 0.f));
//View = glm::rotate(View, angle * 0.9f, glm::vec3(0.f, 0.f, 1.f));
View = glm::translate(View, glm::vec3(0.f,0.f, -4.0f)); // x, y, z position ?
//MODEL
glm::mat4 Model = glm::mat4(1.0);
glm::mat4 MVP = Projection * View * Model;
glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MVP_matrix"), 1, GL_FALSE, glm::value_ptr(MVP));
//Transfer additional information to the vertex shader
glm::mat4 MV = Model * View;
glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "MV_matrix"), 1, GL_FALSE, glm::value_ptr(MV));
glClearColor(0.0, 0.0, 0.0, 1.0);
glDrawArrays(GL_LINE_STRIP, terrain_start, terrain_end );
}
I can do a rotation around the X, Y, Z axes and scale my terrain, but I can't find a way to move the camera. I am using OpenGL 3+ and I am kinda new to graphics.
The best way to move the camera would be through the use of gluLookAt(); it simulates camera movement, since the camera itself cannot be moved at all. The function takes nine parameters. The first three are the XYZ coordinates of the eye, which is where the camera is located. The next three are the XYZ coordinates of the center, the point the camera is looking at from the eye; it is always going to be the center of the screen. The last three are the XYZ coordinates of the up vector, which points vertically upwards from the eye. By manipulating these three XYZ triples you can simulate any camera movement you want.
Check out this link.
Further details:
- If you want, for example, to rotate around an object, you rotate your eye around the up vector.
- If you want to move forward or backwards, you add to or subtract from both the eye and the center points (see the sketch after this list).
- If you want to tilt the camera left or right, you rotate your up vector around your look vector, where the look vector is center - eye.
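For instance, a minimal glm sketch of the forward/backward case (eye, center, up and speed are placeholders):
glm::vec3 look = glm::normalize(center - eye); // the look vector described above
eye += look * speed;                           // move forward; subtract instead to move backwards
center += look * speed;
glm::mat4 view = glm::lookAt(eye, center, up); // rebuild the view matrix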
gluLookAt operates on the deprecated fixed-function pipeline, so you should use glm::lookAt instead.
You are currently using a constant vector for translation. In the commented out code (which I assume you were using to test rotation), you use angle to adjust the rotation. You should have a similar variable for translation. Then, you can change the glm::translate call to:
View = glm::translate(View, glm::vec3(x_transform, y_transform, z_transform)); // x, y, z position ?
and get translation.
You should probably pass more parameters into Render_Terrain, as translation and rotation need at least six of them.
In OpenGL the camera is always at (0, 0, 0). You need to set the matrix mode to GL_MODELVIEW, and then modify or set the model/view matrix using things like glTranslate, glRotate, glLoadMatrix, etc. in order to make it appear that the camera has moved. If you're using GLU, you can use gluLookAt to point the camera in a particular direction.
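Tying this back to the Render_Terrain code above, a minimal glm sketch of replacing the fixed translate with a real view matrix (cameraPos is a placeholder you would update from keyboard or mouse input):
glm::vec3 cameraPos(0.0f, 0.0f, 4.0f); // placeholder; change this to move the camera
glm::mat4 View = glm::lookAt(cameraPos,
                             glm::vec3(0.0f, 0.0f, 0.0f),  // look at the origin
                             glm::vec3(0.0f, 1.0f, 0.0f)); // up vector
glm::mat4 MVP = Projection * View * Model;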