glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
//set viewpoint
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(VIEW_ANGLE,Screen_Ratio,NEAR_CLIP,FAR_CLIP);
gluLookAt(0,5,5, 0,0,0, 0,1,0);
//transform model 1
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(Theta, 0,1,0);
//draw model 1
glBegin(GL_QUADS);
...
glEnd();
The code above works fine, but is there any way to remove the call to gluPerspective?
What I mean is, I would like to call it only once during initialization, instead of repeatedly for every frame that is rendered.
You call gluPerspective there because it belongs there. OpenGL is not a scene graph where you initialize things; it is a state-driven drawing API. The projection matrix is a piece of state, and every serious graphics application changes this state several times over the course of rendering a single frame.
OpenGL does not know about geometric objects, positions or cameras. It just pushes points, lines and triangles through a processing pipeline and draws the result to the screen. After something has been drawn, OpenGL has no recollection of it whatsoever.
I mean calling it only once in initialization.
OpenGL is not initialized (apart from the creation of the rendering context, which is really part of the operating system's graphics stack, not of OpenGL itself). Sure, you upload textures and buffer object data to it, but that can happen at any time.
Do not use gluLookAt on the projection matrix: it defines the camera/view and therefore belongs on the modelview matrix, usually as the left-most transformation (the first one after glLoadIdentity), where it makes up the "view" part of the word "modelview". Although it also works your way, it is conceptually wrong. This would also solve your issue, since you then simply don't have to touch the projection matrix every frame.
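For instance, a minimal rearrangement of the code from the question (reusing the same constants and variables) might look like this:
/* At initialization, or whenever the window is resized: */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(VIEW_ANGLE, Screen_Ratio, NEAR_CLIP, FAR_CLIP);

/* Every frame: */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0,5,5, 0,0,0, 0,1,0);   /* view part of the modelview matrix */
glRotatef(Theta, 0,1,0);          /* model part: transform model 1     */
glBegin(GL_QUADS);
/* ... draw model 1 ... */
glEnd();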
That said, datenwolf's approach is conceptually cleaner with respect to OpenGL's state-machine architecture.
If you don't call glLoadIdentity() (which resets the current matrix to the identity matrix, i.e. undoes what gluPerspective() has done) every frame, and instead carefully push/pop the transform matrices, you can get away with calling it only at initialization quite happily. Usually it's far easier just to call glLoadIdentity each time you start drawing and then set everything up again, e.g.:
// Initialisation
glLoadIdentity();
gluPerspective(...);
Then later on:
// Drawing each frame
glClear(...);
glPushMatrix();
gluLookAt(...);
//draw stuff
glPopMatrix();
I have written this piece of code in C to draw a planetary system with a sun and a planet:
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(0.0, 0.0, 0.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glutWireSphere(1.0, 20, 16);               /* sun */
glLoadIdentity();
glRotatef((GLfloat) year, 0.0, 1.0, 0.0);  /* revolution around the sun */
glTranslatef(2.0, 0.0, 0.0);
glRotatef((GLfloat) day, 0.0, 1.0, 0.0);   /* planet's own rotation */
glutWireSphere(0.2, 10, 8);                /* planet */
glPopMatrix();
glutSwapBuffers();
But when I compile and run the code, the second sphere is not in view. After I increase the value of y, it appears for half of its rotation, when it is behind the centre sphere, and then goes out of view again; the size of the sphere is also larger than expected. If I comment out the call to glLoadIdentity(), everything works fine. As far as I know, glLoadIdentity() loads the current matrix (the modelview matrix) with an identity matrix, so that the effect of all previous translations and rotations is undone. But why, in this case, are the objects drawn differently when it is called, even though no rotations or translations are performed before its call?
I suspect that you do have other transformations in the ModelView matrix. If it was already set to the identity, then loading identity would not change the behaviour.
The code you have provided only sets up a local model transform. There are no projection or view transformations being applied, so these must be done elsewhere. Perhaps at initialisation.
Without a prior view transformation, the first sphere would be drawn at the same position as the view - in other words, the camera would be inside the sphere.
I suspect your problem is related to the way you make use of glPushMatrix and glPopMatrix in conjunction with glLoadIdentity:
Suppose you have some matrix A on your modelview stack. Now you use glPushMatrix, essentially saving A so you can restore it later. Inside the push/pop block, you tell GLUT to draw a sphere, which it dutifully does, using the previously mentioned modelview matrix A.
If you now call glLoadIdentity, whatever A was before is gone until you call glPopMatrix, which restores the previous state of A.
So in short, transformation calls after glLoadIdentity are based on the identity matrix rather than on the matrix state from before glPushMatrix.
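Annotated against your code, the sequence looks roughly like this (the stack comments are only illustrative):
/* the modelview stack currently holds some matrix A */
glPushMatrix();                           /* save a copy of A                          */
glutWireSphere(1.0, 20, 16);              /* sun: drawn with A                         */
glLoadIdentity();                         /* current matrix is now I; A survives only
                                             in the saved copy                         */
glRotatef((GLfloat) year, 0.0, 1.0, 0.0); /* these build on I, not on A                */
glTranslatef(2.0, 0.0, 0.0);
glRotatef((GLfloat) day, 0.0, 1.0, 0.0);
glutWireSphere(0.2, 10, 8);               /* planet: positioned relative to I          */
glPopMatrix();                            /* current matrix is A again                 */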
Seeing that what you want to do is draw a solar system, it would probably be best to do something like:
glPushMatrix();
/* Transformations positioning the sun */
/* Draw the sun */
glPushMatrix();
/* Transformations for getting from the sun to the planet */
/* Draw the planet */
glPopMatrix();
glPopMatrix();
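A filled-in version of that outline, reusing the spheres and angles from your own code, might look like this:
glPushMatrix();
    /* the sun sits at the origin, so no extra transform is needed here */
    glutWireSphere(1.0, 20, 16);                    /* sun */
    glPushMatrix();
        glRotatef((GLfloat) year, 0.0, 1.0, 0.0);   /* revolve around the sun       */
        glTranslatef(2.0, 0.0, 0.0);                /* move out to the orbit radius */
        glRotatef((GLfloat) day, 0.0, 1.0, 0.0);    /* spin about the planet's axis */
        glutWireSphere(0.2, 10, 8);                 /* planet */
    glPopMatrix();
glPopMatrix();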
As an aside, the fixed-function pipeline (glTranslate, glRotate, glPushMatrix, ...) has been deprecated for quite a while now (about 10 years?), so I'd suggest picking up OpenGL using a more modern approach.
Edit:
While re-reading the question, another thing caught my attention:
Assuming year designates where the planet is in its revolution around the sun, and day specifies how much the planet is rotated around its own polar axis, the correct order of transformations would be:
Rotate by day around the y-axis
Translate
Rotate by year around the y-axis
(Right now, you're doing it the opposite way, which could explain the strange behaviour you're experiencing. See also here.)
According to the comment by JasonD, your transformation order was right in the first place, but still it won't hurt to know about the background. ;)
I made a 3D scene and I used glOrtho and gluOrtho2D to get things to stay on my screen when I move the camera to look around in my 3D scene. But when I start to look around, the characters disappear.
How do you get the characters to stay on your screen?
The projection matrix kind of defines your lens. But no matter what lens you use, if you turn the scene or move the camera, the view will change.
How do you get the characters to stay on your screen?
Well, by keeping the "camera" in place.
OpenGL actually doesn't have a camera. It doesn't even have a scene. The only things it sees are the points, lines and triangles it draws, one after another, to the screen. What OpenGL does have are transformation matrices. In your case, all you have to do is set a projection and a modelview matrix that draw the characters at the desired place on the screen. And since OpenGL does not maintain a scene, you can change the transformation matrices at any time you want.
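A minimal per-frame sketch of that idea, assuming the characters are drawn as 2D elements in window coordinates (winW, winH and the camera parameters are placeholder names):
/* 3D pass: the camera applies, so the world moves when you look around. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (double) winW / winH, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(camX, camY, camZ, lookX, lookY, lookZ, 0, 1, 0);
/* ... draw the 3D scene ... */

/* 2D pass: both matrices are simply set again, so the characters are
   positioned in window coordinates and ignore the camera entirely. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, winW, 0, winH);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* ... draw the characters at fixed screen positions ... */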
You probably forgot a "glLoadIdentity();" somewhere...
After your calls to glOrtho...
glOrtho(0.0, windowWidth, 0.0, windowHeight, -10.0, 10.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Hope this helps.
-kropcke
I'm having trouble with OpenGL lighting. My issue is this: when the object has no rotation, the lighting is fine; otherwise the lighting works, but rotates with the object instead of staying fixed with respect to the scene.
Sounds simple, right? The OpenGL FAQ has some simple advice on this: coordinates passed to glLightfv(GL_LIGHT0, GL_POSITION, ...) are multiplied by the current MODELVIEW matrix. So I must be calling this in the wrong place... except I'm not. I've copied the MODELVIEW matrix into a variable to debug, and it stays the same regardless of how my object is rotated. So it has to be something else, but I'm at a loss as to what.
I draw the model using glDrawArrays, and position my model within the world using glMultMatrixf on a matrix built from a rotation quaternion and a translation. All of this takes place within glPushMatrix/glPopMatrix, so it shouldn't have any side effect on the light.
A cut-down version of my rendering process looks like this:
//Setup our camera
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
cameraMatrix = translate(Vector3D(Pos.mX,Pos.mY,Pos.mZ)) * camRot.QuatToMatrix();
glMultMatrixf((GLfloat*)&cameraMatrix);
//Position the light now
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
GLfloat lp[4] = {lightPos.mX, lightPos.mY, lightPos.mZ, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION,(GLfloat*) lp);
//Loop, doing this for each model: (mRot, mPos, and mi are model member variables)
matrix = translate(Vector3D(mPos.mX,mPos.mY,mPos.mZ)) * mRot.QuatToMatrix();
glPushMatrix();
glMultMatrixf((GLfloat*)&matrix);
glBindBuffer(GL_ARRAY_BUFFER, mi->mVertexBufHandle); //Bind the model VBO.
glDrawArrays(GL_TRIANGLES, 0, mi->verts); //Draw the object
glPopMatrix();
I thought the normals might be messed up, but when I render them out they look fine. Is there anything else that might affect OpenGL lighting? The FAQ mentions:
If your light source is part of a light fixture, you also may need to specify a modeling transform, so the light position is in the same location as the surrounding fixture geometry.
I took this to mean that you'd need to translate the light into the scene, kind of a no-brainer... but does it mean something else?
It might be minor, but in this line:
glLightfv(GL_LIGHT0, GL_POSITION,(GLfloat*) &lp);
remove the & (address-of operator); lp already gives you the address of the array.
This was a while back, but I did eventually figure out the problem. The issue I thought I was having was that the light's position got translated wrongly. Picture this: the light was located at (0,0,0), but then I translated and rotated my mesh. If that had been the case, I'd have had to do as suggested in the other answers and make certain I was placing my glLightfv calls in the right place.
The actual problem turned out to be much simpler, yet much more insidious. It turns out I wasn't setting up glNormalPointer correctly, so it was being fed garbage data. While debugging, I'd render the normals to check that they were correct, but when doing so I'd manually draw them based on the positions I'd calculated. A recommendation to future debuggers: when drawing your debug normal rays, make sure you feed the debug function the same data OpenGL gets. In my case, that would mean pointing my normal-ray draw function's glVertexPointer at the same place as the model's glNormalPointer.
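A minimal sketch of the kind of setup that avoids this, assuming positions and normals live in a single interleaved VBO (the Vertex struct, vboHandle and vertexCount are hypothetical names):
/* Hypothetical interleaved layout: position (3 floats) followed by normal (3 floats).
   offsetof comes from <stddef.h>. */
typedef struct { GLfloat pos[3]; GLfloat nrm[3]; } Vertex;

glBindBuffer(GL_ARRAY_BUFFER, vboHandle);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
/* Both pointers use the same stride; the offsets pick out the right field,
   so the normals cannot silently come from the wrong place. */
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid*) offsetof(Vertex, pos));
glNormalPointer(GL_FLOAT, sizeof(Vertex), (const GLvoid*) offsetof(Vertex, nrm));
glDrawArrays(GL_TRIANGLES, 0, vertexCount);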
Basically, an OpenGL light behaves like a vertex. So in your code it is transformed by cameraMatrix, while your meshes are transformed by cameraMatrix * matrix. Now, it looks like both cameraMatrix and matrix contain mRot.QuatToMatrix(), that is: there is a single rotation matrix there, and the light gets rotated once while the objects get rotated twice. That doesn't look right to me, unless your actual code is different; the mRot matrix you use for each mesh should be its own, e.g. mRot[meshIndex].
I've been working on a game engine for a while. I started out with 2D graphics using just SDL, but I've slowly been moving towards 3D capabilities by using OpenGL. Most of the documentation I've seen about "how to get things done" uses GLUT, which I am not using.
The question is: how do I create a "camera" in OpenGL that I can move around a 3D environment and that properly displays 3D models as well as sprites (for example, a sprite that has a fixed position and rotation)? Which functions should I be concerned with in order to set up a camera in OpenGL, and in what order should they be called?
Here is some background information leading up to why I want an actual camera.
To draw a simple sprite, I create a GL texture from an SDL surface and I draw it onto the screen at the coordinates (SpriteX - CameraX, SpriteY - CameraY). This works fine, but when moving towards actual 3D models it doesn't work quite right. The camera's location is a custom vector class (i.e. not using the standard libraries for it) with X, Y, Z integer components.
I have a 3D cube made up of triangles and, by itself, I can draw it and rotate it, and I can actually move the cube around (although in an awkward way) by passing in the camera location when I draw the model and using the components of that location vector to calculate the model's position. Problems become evident with this approach when I go to rotate the model, though. The origin of the model isn't the model itself but seems to be the origin of the screen. Some googling tells me I need to save the location of the model, rotate it about the origin, then restore the model to its original location.
Instead of passing in the location of my camera and calculating where things should be drawn in the viewport by computing new vertices, I figured I would create an OpenGL "camera" to do this for me, so all I would need to do is pass the coordinates of my Camera object into the OpenGL camera and it would translate the view automatically. This task seems to be extremely easy if you use GLUT, but I'm not sure how to set up a camera using just OpenGL.
EDIT #1 (after some comments):
Following some suggestions, here is the update method that gets called throughout my program. It's been updated to create the perspective and view matrices. All drawing happens before this is called, and a similar set of methods is executed when OpenGL executes (minus the buffer swap). The x, y, z coordinates come straight from an instance of Camera and its location vector. If the camera was at (256, 32, 0), then 256, 32 and 0 would be passed into the Update method. Currently, z is set to 0, as there is no way to change that value at the moment. The 3D model being drawn is a set of vertices/triangles plus normals at location X=320, Y=240, Z=-128. When the program is run, the cube does not render correctly in FILL mode or in LINE mode, nor after I move the camera a little bit to the right. It looks like my normals may be the cause, but I think it has more to do with me missing something extremely important, or not completely understanding what the NEAR and FAR parameters of glFrustum actually do.
Before I implemented these changes, I was using glOrtho and the cube rendered correctly. Now, if I switch back to glOrtho, only one face (the green one) renders, and the rotation is quite weird, probably due to the translation. The cube has six different colors, one for each side: red, blue, green, cyan, white and purple.
int VideoWindow::Update(double x, double y, double z)
{
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glFrustum(0.0f, GetWidth(), GetHeight(), 0.0f, 32.0f, 192.0f);
    glMatrixMode( GL_MODELVIEW );

    SDL_GL_SwapBuffers();
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glRotatef(0, 1.0f, 0.0f, 0.0f);
    glRotatef(0, 0.0f, 1.0f, 0.0f);
    glRotatef(0, 0.0f, 0.0f, 1.0f);
    glTranslated(-x, -y, 0);

    return 0;
}
EDIT FINAL:
The problem turned out to be an issue with the Near and Far arguments of glFrustum and the Z value of glTranslated. While changing the values has fixed it, I'll probably have to learn more about the relationship between the two functions.
You need a view matrix, and a projection matrix. You can do it one of two ways:
Load the matrix yourself, using glMatrixMode() and glLoadMatrixf(), after you use your own library to calculate the matrices.
Use combinations of glMatrixMode(GL_MODELVIEW) and glTranslate()/glRotate() to create your view matrix, and glMatrixMode(GL_PROJECTION) with glFrustum() to create your projection matrix. Remember: your view matrix is the negated translation of your camera's position (as it is where you should move the world to, relative to the camera origin), plus any rotations applied (pitch/yaw).
Hope this helps, if I had more time I'd write you a proper example!
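For what it's worth, a rough sketch of the second option, assuming the camera is described by a position (camX, camY, camZ) and pitch/yaw angles, with aspect being the window's width/height ratio (all placeholder names):
/* Projection: typically set once, or whenever the window is resized. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-0.1 * aspect, 0.1 * aspect, -0.1, 0.1, 0.1, 1000.0);

/* View: the inverse of the camera transform, rebuilt every frame. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(-pitch, 1.0f, 0.0f, 0.0f);   /* undo the camera's rotation ...              */
glRotatef(-yaw,   0.0f, 1.0f, 0.0f);
glTranslatef(-camX, -camY, -camZ);     /* ... then move the world opposite the camera */

/* Per-object model transforms (glPushMatrix / glTranslatef / glRotatef) follow here. */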
You have to do it using the matrix stack, just as for an object hierarchy; but the camera is inside that hierarchy, so you have to put its inverse transform on the stack before drawing the objects, as OpenGL only works with the matrix that maps from 3D space to the camera.
If you have not checked it already, then perhaps looking at the following project will explain in detail what "tsalter" wrote in his post.
Camera from OGL SDK (CodeColony)
Also look at the Red Book for an explanation of viewing and of how the modelview and projection matrices help you create a camera. It starts with a good comparison between an actual camera and what corresponds to it in the OpenGL API. Chapter 3 - Viewing.
I'd like to try to implement some HCI for my existing OpenGL application. If possible, the menus should appear in front of my 3D graphics, which would be in the background.
I was thinking of drawing a square directly in front of the "camera", and then drawing either textures or more primitives on top of that "base" square.
While the menus are active the camera can't move, so that the camera doesn't look away from the menus.
Does this sound far-fetched to anyone, or am I on the right track? How would everyone else do it?
I would just glPushMatrix, glLoadIdentity, do your drawing, then glPopMatrix and not worry about where your camera is.
You'll also need to disable and re-enable depth testing, lighting and such.
There is the GLUI library to do this (no personal experience)
Or, if you are using Qt, there are ways of rendering Qt widgets transparently on top of the OpenGL model; there is also beta support for rendering all of Qt in OpenGL.
You could also do all your 3D rendering, then switch to an orthographic projection and draw all your menu objects. This would be much easier than putting it all on a large billboarded quad as you suggested.
Check out this excerpt, specifically the heading "Projection Transformations".
As stated there, you need to apply a translation of 0.375 in x and y to get pixel-perfect alignment:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, width, 0, height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.375, 0.375, 0.0);
/* render all primitives at integer positions */
The algorithm is simple:
Draw your 3D scene, presumably with depth testing enabled.
Disable depth testing so that your GUI elements will draw over the 3D stuff.
Use glPushMatrix to store your current modelview and projection matrices (assuming you want to restore them; otherwise, just overwrite them)
Set up your model view and projection matrices as described in the above code
Draw your UI stuff
Use glPopMatrix to restore your pushed matrices (assuming you pushed them)
Doing it like this makes the camera position irrelevant - in fact, as the camera moves, the 3D parts will be affected as normal, but the 2D overlay stays in place. I'm expecting that this is the behaviour you want.
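A sketch of that sequence in code; draw_scene_3d, draw_menus, windowWidth and windowHeight are placeholders for your own drawing functions and window size:
/* 1. Draw the 3D scene as usual, with depth testing enabled. */
draw_scene_3d();

/* 2. Overlay pass: disable state that would interfere with the 2D UI. */
glDisable(GL_DEPTH_TEST);
glDisable(GL_LIGHTING);

/* 3. Save the current matrices, then switch to a pixel-aligned ortho view. */
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, windowWidth, 0, windowHeight);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glTranslatef(0.375f, 0.375f, 0.0f);

/* 4. Draw the menu / UI elements at integer pixel positions. */
draw_menus();

/* 5. Restore the saved matrices and state (if lighting was on before). */
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glEnable(GL_LIGHTING);
glEnable(GL_DEPTH_TEST);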