I've got a small scene with a loaded mesh, a ground plane and a skybox. I am generating a cube, and using the vertex positions as the cubemap texture co-ordinates.
Horizontal rotation (about the y-axis) works perfectly, and the world movement stays aligned with the skybox. Vertical rotation (about the camera's x-axis) doesn't seem to match the movement of the other objects; the strange thing is that when the camera is looking at the center of a cube face, everything appears aligned again. In other words, the movement is non-linear. I'll try my best to illustrate the effect with some images:
First, the horizontal movement which as far as I can tell is correct:
Facing forward:
Facing left at almost 45 degrees:
Facing left at 90 degrees:
And now the vertical movement, which seems to have some discrepancy:
Facing forward again:
Notice the position of the ground plane in relation to the skybox in this image. I rotated slightly left to make it more apparent that the Sun is being obscured when it shouldn't be.
Facing slightly down:
Finally, a view straight up to show the view is correctly centered on the (skybox) cube face.
Facing straight up:
Here's my drawing code, with the ground plane and mesh drawing omitted for brevity. (Note that the cube in the center is a loaded mesh, and isn't generated by the same function as the skybox.)
void MeshWidget::draw() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glPushMatrix();
    glRotatef(-rot_[MOVE_CAMERA][1], 0.0f, 1.0f, 0.0f);
    glRotatef(-rot_[MOVE_CAMERA][0], 1.0f, 0.0f, 0.0f);
    glRotatef(-rot_[MOVE_CAMERA][2], 0.0f, 0.0f, 1.0f);

    glDisable(GL_DEPTH_TEST);
    glUseProgramObjectARB(shader_prog_ids_[2]);
    glBindBuffer(GL_ARRAY_BUFFER, SkyBoxVBOID);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vec3), BUFFER_OFFSET(0));
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, SkyIndexVBOID);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
    glUseProgramObjectARB(shader_prog_ids_[0]);
    glEnable(GL_DEPTH_TEST);
    glPopMatrix();

    glTranslatef(0.0f, 0.0f, -4.0f + zoom_factor_);
    glRotatef(rot_[MOVE_CAMERA][0], 1.0f, 0.0f, 0.0f);
    glRotatef(rot_[MOVE_CAMERA][1], 0.0f, 1.0f, 0.0f);
    glRotatef(rot_[MOVE_CAMERA][2], 0.0f, 0.0f, 1.0f);

    glPushMatrix();
    // Transform light to be relative to world, not camera.
    glRotatef(rot_[MOVE_LIGHT][1], 0.0f, 1.0f, 0.0f);
    glRotatef(rot_[MOVE_LIGHT][0], 1.0f, 0.0f, 0.0f);
    glRotatef(rot_[MOVE_LIGHT][2], 0.0f, 0.0f, 1.0f);
    float lightpos[] = {10.0f, 0.0f, 0.0f, 1.0f};
    glLightfv(GL_LIGHT0, GL_POSITION, lightpos);
    glPopMatrix();

    if (show_ground_) {
        // Draw ground...
    }

    glPushMatrix();
    // Transform and draw mesh...
    glPopMatrix();
}
And finally, here's the GLSL code for the skybox, which generates the texture co-ordinates:
Vertex shader:
void main()
{
    vec4 vVertex = vec4(gl_ModelViewMatrix * gl_Vertex);
    gl_TexCoord[0].xyz = normalize(vVertex).xyz;
    gl_TexCoord[0].w = 1.0;
    gl_TexCoord[0].x = -gl_TexCoord[0].x;
    gl_Position = gl_Vertex;
}
Fragment shader:
uniform samplerCube cubeMap;
void main()
{
    gl_FragColor = texture(cubeMap, gl_TexCoord[0].xyz);
}
I'd also like to know if using quaternions for all camera and object rotations would help.
If you need any more information (or images), please ask!
I think you should be generating your skybox texture lookup from a worldspace vector (gl_Vertex?), not a view-space vector (vVertex).
I'm assuming your skybox coordinates are already defined in worldspace, as I don't see a model matrix transform before drawing it (only the camera rotations). In that case, you should be sampling the skybox texture based on the worldspace position of a vertex; it doesn't need to be transformed by the camera. You're already translating the vertex by the camera, so you shouldn't need to translate the lookup vector as well.
Try replacing normalize(vVertex) with normalize(gl_Vertex) and see if that improves things.
Also, I might get rid of the x = -x thing; I suspect that was put in to compensate for the texture rotating in the wrong direction originally?
I'd also like to know if using quaternions for all camera and object rotations would help.
Help how? Quaternions don't offer any new functionality over matrices. I've heard arguments both ways as to whether matrices or quaternions have better performance, but I see no need to use them here.
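To illustrate that point: any quaternion rotation can be converted to a matrix and handed to OpenGL, so nothing is gained or lost either way. A minimal sketch, not from the original posts (the helper names are made up for this example, and the rotation axis is assumed to be unit length):

#include <math.h>
#include <GL/gl.h>

/* Build a column-major 4x4 OpenGL matrix from a unit quaternion (w, x, y, z). */
static void quat_to_gl_matrix(float w, float x, float y, float z, GLfloat m[16]) {
    m[0] = 1.0f - 2.0f*(y*y + z*z); m[4] = 2.0f*(x*y - w*z);        m[8]  = 2.0f*(x*z + w*y);        m[12] = 0.0f;
    m[1] = 2.0f*(x*y + w*z);        m[5] = 1.0f - 2.0f*(x*x + z*z); m[9]  = 2.0f*(y*z - w*x);        m[13] = 0.0f;
    m[2] = 2.0f*(x*z - w*y);        m[6] = 2.0f*(y*z + w*x);        m[10] = 1.0f - 2.0f*(x*x + y*y); m[14] = 0.0f;
    m[3] = 0.0f;                    m[7] = 0.0f;                    m[11] = 0.0f;                    m[15] = 1.0f;
}

/* Equivalent of glRotatef(angle_deg, ax, ay, az), done via a quaternion. */
static void rotate_with_quaternion(float angle_deg, float ax, float ay, float az) {
    float half = angle_deg * 3.14159265f / 360.0f;  /* half angle, in radians */
    float s = sinf(half);
    GLfloat m[16];
    quat_to_gl_matrix(cosf(half), ax*s, ay*s, az*s, m);
    glMultMatrixf(m);
}

In practice quaternions mainly pay off when interpolating between orientations; for applying a fixed set of rotations like the ones in your code they change nothing.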
Related
I am loading an object from an OBJ file in OpenGL using the GLM library, but it comes out on the screen upside down. I am also giving the user the ability to rotate all the objects with the mouse, e.g. zoom in and out and rotate about the y-axis.
The problem is I don't know how to rotate the object first so that it faces the way I need. After that I want to draw the object, and of course at that point the mouse can still play its role and rotate it.
My draw function contains the following code:
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glTranslatef(m_fPosX, m_fPosY,-m_fZoom);
glRotatef(m_fRotX, 1.0f, 0.0f, 0.0f);//these two for mouse movement
glRotatef(m_fRotY, 0.0f, 1.0f, 0.0f);
glRotated(180,0.0f, 0.0f, -1.0f);
glDisable(GL_TEXTURE_2D);
glColor3f(0.90f,0.90f,0.90f);
glmDraw(m_p3dModel,GLM_SMOOTH | GLM_MATERIAL);
glPopMatrix();
Here the rotate calls all depend on mouse movement, but if the model loads upside down I don't know how to first make it face the right direction and then still allow the mouse rotations in this draw function... which means I need to set its orientation before these mouse-driven rotations are applied.
glRotatef(m_fRotY, 1.0f, 0.0f, 0.0f);
should be
glRotatef(m_fRotY, 0.0f, 1.0f, 0.0f);
copy/paste is the mother of many evils :o)
For a transformation / projection / modelview tutorial, check the link.
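As for making the model face the right way before the mouse gets involved: issue a fixed corrective rotation after the mouse-driven rotations in the code, so that it is applied to the model first. A minimal sketch based on the draw code from the question (the 180-degree z-axis flip is only an assumption about what your particular model needs):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glTranslatef(m_fPosX, m_fPosY, -m_fZoom);
glRotatef(m_fRotX, 1.0f, 0.0f, 0.0f);   // mouse-driven rotations
glRotatef(m_fRotY, 0.0f, 1.0f, 0.0f);
// Fixed orientation correction: because it is issued last, it is applied to
// the model first, so the mouse always rotates an already-upright model.
glRotatef(180.0f, 0.0f, 0.0f, 1.0f);
glmDraw(m_p3dModel, GLM_SMOOTH | GLM_MATERIAL);
glPopMatrix();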
I heard that I should use normals instead of colors, because colors are deprecated. (Is that true?) Normals have something to do with the reflection of light, but I can't find a clear and intuitive explanation. What is a normal?
A normal in general is a unit vector whose direction is perpendicular to a surface at a specific point. Therefore it tells you in which direction a surface is facing. The main use case for normals is lighting calculations, where you have to determine the angle (or, in practice, often its cosine) between the normal at a given surface point and the direction towards a light source or a camera.
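That cosine is just a dot product of unit vectors. A minimal sketch, not from the original answer (the function name is made up for this example):

/* Cosine of the angle between the surface normal and the direction towards the
 * light, clamped to 0 so surfaces facing away from the light get no diffuse
 * contribution. Both vectors are assumed to be unit length. */
static float diffuse_factor(const float n[3], const float to_light[3]) {
    float d = n[0]*to_light[0] + n[1]*to_light[1] + n[2]*to_light[2];
    return d > 0.0f ? d : 0.0f;
}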
glNormal minimal example
glNormal is a deprecated OpenGL 2 method, but it is simple to understand, so let's look into it. The modern shader alternative is discussed below.
This example illustrates some details of how glNormal works with diffuse lighting.
The comments in the display function explain what each triangle means.
#include <stdlib.h>

#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

/* Triangle on the x-y plane. */
static void draw_triangle() {
    glBegin(GL_TRIANGLES);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glEnd();
}

/* A triangle tilted 45 degrees manually. */
static void draw_triangle_45() {
    glBegin(GL_TRIANGLES);
    glVertex3f( 0.0f,  1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f,  0.0f);
    glVertex3f( 1.0f, -1.0f,  0.0f);
    glEnd();
}
static void display(void) {
    glColor3f(1.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glPushMatrix();

    /*
    Triangle perpendicular to the light.
    0,0,1 also happens to be the default normal if we hadn't specified one.
    */
    glNormal3f(0.0f, 0.0f, 1.0f);
    draw_triangle();

    /*
    This triangle is as bright as the previous one.
    That is not photorealistic: since it is tilted away from the light,
    it should be less bright.
    */
    glTranslatef(2.0f, 0.0f, 0.0f);
    draw_triangle_45();

    /*
    Same as the previous triangle, but with the normal set to the
    photorealistically correct 45-degree direction, making it less bright.
    Note that the norm of this normal vector is not 1,
    but we are fine since we are using `glEnable(GL_NORMALIZE)`.
    */
    glTranslatef(2.0f, 0.0f, 0.0f);
    glNormal3f(0.0f, 1.0f, 1.0f);
    draw_triangle_45();

    /*
    This triangle is rotated 45 degrees with a glRotate.
    It should be as bright as the previous one,
    even though we set the normal to 0,0,1.
    So glRotate also affects the normal!
    */
    glTranslatef(2.0f, 0.0f, 0.0f);
    glNormal3f(0.0, 0.0, 1.0);
    glRotatef(45.0, -1.0, 0.0, 0.0);
    draw_triangle();

    glPopMatrix();
    glFlush();
}
static void init(void) {
    GLfloat light0_diffuse[] = {1.0, 1.0, 1.0, 1.0};
    /* Plane wave coming from +z infinity. */
    GLfloat light0_position[] = {0.0, 0.0, 1.0, 0.0};

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glLightfv(GL_LIGHT0, GL_POSITION, light0_position);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light0_diffuse);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glColorMaterial(GL_FRONT, GL_DIFFUSE);
    glEnable(GL_COLOR_MATERIAL);
    glEnable(GL_NORMALIZE);
}

static void reshape(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1.0, 7.0, -1.0, 1.0, -1.5, 1.5);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(800, 200);
    glutInitWindowPosition(100, 100);
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return EXIT_SUCCESS;
}
Theory
In OpenGL 2 each vertex has its own associated normal vector.
The normal vector determines how brightly the vertex is lit, and the vertex brightness is then used to determine how bright the triangle is.
OpenGL 2 uses the Phong reflection model, in which light is separated into three components: ambient, diffuse and specular. Of those, the diffuse and specular components are affected by the normal:
if the light hits the surface perpendicularly (i.e. along the normal), the diffuse component makes it brighter, no matter where the observer is
if the light hits the surface and bounces off right into the eye of the observer, the specular component makes that point brighter
glNormal sets the current normal vector, which is used for all following vertices.
The initial value of the normal, before we ever call glNormal, is (0, 0, 1).
Normal vectors must have norm 1, or else colors change! glScale also alters the length of normals! glEnable(GL_NORMALIZE); makes OpenGL automatically renormalize them to unit length for us.
Why it is useful to have normals per vertex instead of per face
Given two spheres with the same number of polygons, the one with per-vertex normals looks much smoother.
OpenGL 4 fragment shaders
In newer OpenGL APIs, you pass the normal data to the GPU as an arbitrary chunk of data: the GPU does not know that it represents normals.
Then you write your own fragment shader, an arbitrary program that runs on the GPU, which reads the normal data you pass to it and implements whatever lighting algorithm you want. You can implement Phong efficiently if you feel like it, by manually calculating a few dot products.
This gives you full flexibility to change the algorithm design, which is a major feature of modern GPUs. See: https://stackoverflow.com/a/36211337/895245
Examples of this can be found in any of the "modern" OpenGL 4 tutorials, e.g. https://github.com/opengl-tutorials/ogl/blob/a9fe43fedef827240ce17c1c0f07e83e2680909a/tutorial08_basic_shading/StandardShading.fragmentshader#L42
Bibliography
https://gamedev.stackexchange.com/questions/50653/opengl-why-do-i-have-to-set-a-normal-with-glnormal
https://www.opengl.org/sdk/docs/man2/xhtml/glNormal.xml
http://www.tomdalling.com/blog/modern-opengl/06-diffuse-point-lighting/
http://learnopengl.com/#!Advanced-Lighting/Normal-Mapping
Many things are now deprecated, including the built-in normal and color attributes (glNormal, glColor). That just means you have to handle them yourself: you pass the normals to your shaders and use them to shade your objects. It's up to you to do the calculations, but there are plenty of tutorials on e.g. Gouraud/Phong shading.
Edit: There are two types of normals: face normals and vertex normals. Face normals point away from the triangle, vertex normals point away from the vertex. With vertex normals you can achieve better quality, but face normals also have many uses, e.g. in collision detection and shadow volumes.
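For illustration, a face normal is easy to compute from a triangle's vertices with a cross product; per-vertex normals are then often obtained by averaging the face normals of the faces that share each vertex. A minimal sketch, not from the original answer (the function name is made up for this example):

#include <math.h>

/* Unit face normal of the triangle (a, b, c), assuming counter-clockwise winding. */
static void face_normal(const float a[3], const float b[3], const float c[3], float n[3]) {
    float u[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    float v[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    float len;
    n[0] = u[1]*v[2] - u[2]*v[1];   /* cross product u x v */
    n[1] = u[2]*v[0] - u[0]*v[2];
    n[2] = u[0]*v[1] - u[1]*v[0];
    len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) {               /* normalize to unit length */
        n[0] /= len; n[1] /= len; n[2] /= len;
    }
}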
I have the same question as in the title :/
I do something like this:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0, -zoom);
glRotatef(yr*20+moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20+moveZr*20, 1.0f, 0.0f, 0.0f);
//Here model render :/
And in my app the camera is rotating, not the model :/
Presumably the reason you believe it's your camera moving and not your model is that all the objects in the scene are moving together?
After rendering your model and before rendering other objects, are you resetting the MODELVIEW matrix? In other words, are you doing another glLoadIdentity() or glPopMatrix() after you render the model you're talking about and before rendering other objects?
Because if not, whatever transformations you applied to that model will also apply to other objects rendered, and it will be as if you rotated the whole world (or the camera).
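As a minimal sketch of that idea (using the variable names from the question), wrap the model's transform in glPushMatrix/glPopMatrix so it doesn't leak onto everything drawn afterwards:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glPushMatrix();                              // isolate the model's transform
glTranslatef(0.0f, 0.0f, -zoom);
glRotatef(yr*20 + moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20 + moveZr*20, 1.0f, 0.0f, 0.0f);
// render the model here
glPopMatrix();                               // later objects are unaffected

// translate/rotate and draw other objects here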
I think there may be another problem with your code though:
glTranslatef(0.0f, 0, -zoom);
glRotatef(yr*20+moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20+moveZr*20, 1.0f, 0.0f, 0.0f);
//Here model render :/
Are you trying to rotate your model around the point (0, 0, -zoom)?
Normally, in order to rotate around a certain point (x,y,z), you do:
1. Translate (x,y,z) to the origin (0,0,0) by translating by the vector (-x,-y,-z)
2. Perform rotations
3. Translate the origin back to (x,y,z) by translating by the vector (x,y,z)
4. Draw
If you are trying to rotate around the point (0, 0, -zoom), you are missing step 3.
So try adding this before you render your model:
glTranslatef(0.0f, 0, zoom); // translate back from origin
On the other hand if you are trying to rotate the model around the origin (0,0,0), and also move it along the z-axis, then you will want your translation to come after your rotation, as #nonVirtual said.
And don't forget to reset the MODELVIEW matrix before you draw other objects. So the whole sequence would be something like:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
#if you want to rotate around (0, 0, -zoom):
glTranslatef(0.0f, 0, -zoom);
glRotatef(yr*20+moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20+moveZr*20, 1.0f, 0.0f, 0.0f);
glTranslatef(0.0f, 0, zoom); // translate back from origin
#else if you want to rotate around (0, 0, 0):
glRotatef(yr*20+moveYr*20, 0.0f, 1.0f, 0.0f);
glRotatef(zr*20+moveZr*20, 1.0f, 0.0f, 0.0f);
glTranslatef(0.0f, 0, -zoom);
#endif
//Here model render :/
glLoadIdentity();
// translate/rotate for other objects if necessary
// draw other objects
You use GL_MODELVIEW when defining the transformation for your objects and GL_PROJECTION to transform your viewpoint.
I believe it has to do with the order in which you are applying your transformations. Try switching the order of the calls to glRotate and glTranslate. As it is now, you are moving it outward from the origin and then rotating it about the origin, which will give the appearance of the object orbiting around the camera. If you instead rotate it while it is still at the origin and then move it, it should give you the desired result. Hope that helps.
How can I retrieve the current position of the vertices after they have been transformed? I have the following code....
How can I get the position of the "modelVertices" after the transform has been applied?
I am really after the screen coordinates so I can tell if a vert has been clicked by the mouse.
glMatrixMode(GL_MODELVIEW);
// save model transform matrix
GLfloat currentModelMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, currentModelMatrix);
// clear the model transform matrix
glLoadIdentity();
// rotate the x and y axis of the model transform matrix
glRotatef(rotateX, 1.0f, 0.0f, 0.0f);
glRotatef(rotateY, 0.0f, 1.0f, 0.0f);
// reapply the previous transforms
glMultMatrixf(currentModelMatrix);
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// since each vertex is defined in 3D instead of 2D, parameter one has been
// upped from 2 to 3 to reflect this change.
glVertexPointer(3, GL_FLOAT, 0, modelVertices);
glEnableClientState(GL_VERTEX_ARRAY);
GLU has the functions gluProject() and gluUnProject():
http://www.opengl.org/sdk/docs/man/xhtml/gluProject.xml
If your platform doesn't have GLU you can always grab the code for Mesa and see how it is implemented.
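A minimal sketch of using gluProject() to get the window coordinates of a single model-space vertex (call it while the same modelview/projection matrices you render with are current; the function name is made up for this example):

#include <GL/gl.h>
#include <GL/glu.h>

/* Returns nonzero on success. winX/winY are window coordinates with the
 * origin at the lower-left corner, so flip Y if you compare them against
 * mouse coordinates that have their origin at the top-left. */
static int project_vertex(const GLfloat v[3],
                          GLdouble *winX, GLdouble *winY, GLdouble *winZ) {
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);
    return gluProject(v[0], v[1], v[2], model, proj, viewport,
                      winX, winY, winZ) == GL_TRUE;
}

You can then compare (winX, winY) against the mouse position to decide whether a vertex was clicked.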
You could use the feedback buffer.
I draw buildings in my game world and shade them with the following code:
GLfloat light_ambient[] = {0.0f, 0.0f, 0.0f, 1.0f};
GLfloat light_position[] = {135.66f, 129.83f, 4.7f, 1.0f};
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);
glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glColorMaterial(GL_FRONT, GL_AMBIENT);
It works nicely.
But when I start flying around the world, the lighting reacts as if the world were an object being rotated. So the lighting changes when my camera angle changes.
How do I undo that rotation, so the lighting behaves as if I were not actually rotating the world? Then I could give my buildings static shading that only changes depending on where the sun is in the sky.
Edit: here is the rendering code:
int DrawGLScene()
{
    // stuff
    glLoadIdentity();
    glRotatef(XROT, 1.0f, 0.0f, 0.0f);
    glRotatef(YROT, 0.0f, 1.0f, 0.0f);
    glRotatef(ZROT, 0.0f, 0.0f, 1.0f);
    glTranslatef(-XPOS, -YPOS, -ZPOS);
    // draw world
}
http://www.opengl.org/resources/faq/technical/lights.htm
See #18.050
In short, you need to make sure you're defining your light position in the right reference frame and applying the appropriate transforms each frame to keep it where you want it.
Edit:
With the removal of the fixed-function pipeline in OpenGL 3.1, the code and answer here are deprecated. The correct approach now is to pass your light position(s) into your vertex/fragment shaders and perform your shading using that position in world space. The calculation varies based on the type of lighting you're doing (PBR, Phong, deferred, etc.).
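For example, on the application side you would upload the world-space light position as a uniform each frame and let the shaders do the lighting math with it (a sketch only; the program handle and the uniform name lightPosWorld are assumptions for this example):

GLfloat lightPosWorld[3] = { 10.0f, 20.0f, 5.0f };   /* fixed position in world space */

glUseProgram(shaderProgram);
GLint loc = glGetUniformLocation(shaderProgram, "lightPosWorld");
glUniform3fv(loc, 1, lightPosWorld);
/* The fragment shader then computes its diffuse/specular terms from
 * lightPosWorld and the fragment's world-space position and normal,
 * so the result does not depend on the camera transform. */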
I believe your problem arises from the fact that OpenGL light positions are transformed by the current modelview matrix when you specify them, and are then stored internally in camera (eye) coordinates. This means that if you place a light before applying the camera transform, it is effectively glued to the camera: whenever you move the camera, the light moves with it.
To fix this, simply re-specify your light positions each frame, after the camera transform has been applied to the modelview matrix; that way they stay fixed in world space.
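A minimal sketch of that, based on the rendering code from the question (the light position values are the ones from the question's lighting setup):

int DrawGLScene()
{
    // stuff
    glLoadIdentity();
    glRotatef(XROT, 1.0f, 0.0f, 0.0f);
    glRotatef(YROT, 0.0f, 1.0f, 0.0f);
    glRotatef(ZROT, 0.0f, 0.0f, 1.0f);
    glTranslatef(-XPOS, -YPOS, -ZPOS);

    // Re-specify the light every frame, AFTER the camera transform:
    // GL_POSITION is transformed by the current modelview matrix, so doing it
    // here keeps the light fixed in world space instead of stuck to the camera.
    GLfloat light_position[] = {135.66f, 129.83f, 4.7f, 1.0f};
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);

    // draw world
}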