I have an OpenGL project where I'm just trying to draw a red rectangle to the screen. The problem is that 1) it's huge, taking up almost the entire screen, and 2) it's tilted. I'm really new to OpenGL, so I don't understand the coordinate system or what a few functions do, such as glOrtho().
Here's the code:
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_QUADS);
    glColor3f(1, 0, 0); // NOT SURE WHERE THIS STARTS, AND HOW THE COORDINATES WORK
    glVertex2f(-1.0f, 1.0f);
    glVertex2f( 1.0f, 1.0f);
    glVertex2f( 1.0f,-1.0f);
    glVertex2f(-1.0f,-1.0f);
    glEnd();
    glFlush();
}
void init()
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 10.0, 0, 10.0, -1.0, 1.0); // What does this do and how do its coordinates work?
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(30.0, 1.0, 1.0, 1.0);
    glEnable(GL_DEPTH_TEST);
}
int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitWindowSize(600, 600);
    glutInitWindowPosition(250, 250);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE | GLUT_DEPTH);
    glutCreateWindow("Model View");
    glutDisplayFunc(display);
    init();
    glutMainLoop();
    return 0;
}
Anyway, I'd prefer to make this into a learning experience, so please explain and link to things that would help! Thanks.
The display procedure is responsible for the actual drawing.
void display()
{
This line clears the buffer. The buffer is the memory area where the image is rendered; you can think of it as a 600x600 matrix. To clear it means to set every cell of the matrix to the same value. Every cell is a pixel and contains a color and a depth. With this call you are telling OpenGL to paint everything opaque black and to reset the depth to 1. Why opaque black? Because of your call to glClearColor: the first three parameters are the red, green and blue components, each ranging between 0 and 1, and 0,0,0 means black. For the last component you specified 1, which means opaque; 0 would be transparent. This last component is called alpha and is used when alpha blending is enabled. Why is the clear depth 1? Because 1 is the default, and you didn't call glClearDepth to override it.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
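For instance, a minimal sketch that makes both defaults explicit (these values just restate what your program already does implicitly):

glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // clear color: opaque black
glClearDepth(1.0);                    // 1.0 is already the default clear depth
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);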
This is telling OpenGL that you want to draw quadrilaterals.
glBegin(GL_QUADS);
You want this quadrilateral to be red (remember that the first component of a color is red).
glColor3f(1, 0, 0); // NOT SURE WHERE THIS STARTS, AND HOW THE COORDINATES WORK
Now you list the vertices of the quadrilateral (there is only one, with four vertices). All vertices will be red because you never update the color by calling glColor3f again; you can associate a different color with every vertex, and the result is usually quite pretty if you pick red (1,0,0), green (0,1,0), blue (0,0,1) and white (1,1,1); see the sketch after glEnd below. This quadrilateral should appear on screen as a square, because it is geometrically a square, your window is square, and the camera (defined with glOrtho) has a square aspect ratio (the first four parameters of the call to glOrtho). If you didn't call glOrtho you would probably see only red, because the default OpenGL coordinates range between -1 and 1, so you would be covering the entire window.
glVertex2f(-1.0f, 1.0f);
glVertex2f( 1.0f, 1.0f);
glVertex2f( 1.0f,-1.0f);
glVertex2f(-1.0f,-1.0f);
This means that you are done with drawing.
glEnd();
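As promised above, a minimal sketch of per-vertex coloring; the fixed-function pipeline interpolates the colors across the quad for you:

glBegin(GL_QUADS);
glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-1.0f,  1.0f); // red
glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 1.0f,  1.0f); // green
glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 1.0f, -1.0f); // blue
glColor3f(1.0f, 1.0f, 1.0f); glVertex2f(-1.0f, -1.0f); // white
glEnd();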
Technically, it may be that OpenGL hasn't yet sent any of the commands you specified to the graphics card; commands may be queued for efficiency reasons. Calling glFlush forces the commands to be sent to the graphics card.
glFlush();
}
You wrote this function, init, to initialize the OpenGL state that you felt would remain stable across the application. In reality, a real application like a game would have most of this stuff inside display; for instance, a game must continuously update the camera as the player moves.
void init()
{
Here, as we said before, you are setting the clear color to opaque black.
glClearColor(0.0, 0.0, 0.0, 1.0);
Here you are saying that the camera is not of a perspective type; in other words, things that are far away don't get smaller. It is similar to the view an artificial satellite has of a city. In particular, you are creating a camera which is not "centered" on the field of view: I recommend using a call like glOrtho(-10.0, 10.0, -10.0, 10.0, -1.0, 1.0) for your first experiments (see the sketch after init below). For a non-perspective camera, the coordinates you specify here override the -1 to +1 convention mentioned above. Try adjusting the parameters so that your red square appears small and centered.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 10.0, 0, 10.0, -1.0, 1.0); // What does this do and how do its coordinates work?
Here you are positioning the camera relative to the square, or the square relative to the camera; there are many equivalent ways to see it. You are defining a geometric transformation, and the reason it is called MODELVIEW is that it is not uniquely something that alters the model (the square) or the view (the camera), but both, depending on how you look at it. Your square appears rotated because you are calling glRotatef; remove it and the square should appear as a square.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(30.0, 1.0, 1.0, 1.0);
Depth testing is a technique that uses the depth stored in the buffer to remove hidden surfaces, for instance the back faces of a cube in a 3D scene. Your scene is 2D and you are drawing only one quad, so it really does not affect your drawing.
glEnable(GL_DEPTH_TEST);
}
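As mentioned above, here is a sketch of an init that should give you a small, centered, untilted square (the ortho bounds are only a suggestion to experiment with):

void init()
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10.0, 10.0, -10.0, 10.0, -1.0, 1.0); // centered camera, 20x20 units visible
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                             // no glRotatef, so no tilt
    glEnable(GL_DEPTH_TEST);
}

With these bounds, your -1..1 square covers only a tenth of the window in each direction, right in the middle.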
In main you are interacting with GLUT, a library which is not part of OpenGL but is useful for carrying out some boring and tedious operations that only the operating system is authorized to perform.
int main(int argc, char *argv[])
{
First you must initialize GLUT.
glutInit(&argc, argv);
Then you define the window that will contain your rendered image.
glutInitWindowSize(600, 600);
glutInitWindowPosition(250, 250);
GLUT_RGB means that your window only supports red, green and blue, and doesn't have an alpha channel (this is very often the case). GLUT_DEPTH means that your buffer will be able to store the depth of each pixel. GLUT_SINGLE means that the window is single-buffered; that is, your commands draw directly on the window. The other option is double buffering, where you draw on a back buffer and then swap the front and back buffers, so that the rendered image appears all at once rather than progressively (see the sketch below). Your scene is so simple that you shouldn't notice any difference between GLUT_SINGLE and GLUT_DOUBLE.
glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE | GLUT_DEPTH);
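If you ever want to try double buffering, the sketch below shows the two changes involved; this is the standard GLUT double-buffering pattern:

glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH); // GLUT_DOUBLE instead of GLUT_SINGLE
...
// and at the end of display(), present the back buffer instead of calling glFlush():
glutSwapBuffers();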
Then you actually create the window.
glutCreateWindow("Model View");
You tell GLUT which function should be called to render the scene.
glutDisplayFunc(display);
Here you call your init function.
init();
This is the window loop, provided by GLUT. Most windowing systems require a software loop to keep the window alive and able to respond to clicks, drags, resizes and keystrokes.
glutMainLoop();
return 0;
}
Long story short, several versions of OpenGL are available, and they can be programmed using several languages and targeted at several platforms. The single most important difference between these versions is that some use a fixed-function pipeline (FFP) whereas the newest ones have a programmable pipeline. Your program uses the fixed-function pipeline. You should switch to a programmable pipeline whenever you can, because it is the modern way of doing computer graphics and is much more flexible, even though it requires a little more programming, as the name suggests.
You should ignore the tutorials that I linked originally; I didn't immediately realize how outdated they were. You should go with the one recommended by datenwolf or, if you are interested in mobile development, you could consider learning OpenGL ES 2 (the 2 is important, because the previous version was fixed-function). There is also a variant of OpenGL ES 2 for HTML5 and JavaScript, called WebGL. You can find the tutorials here, together with a ZIP file containing all the examples; I use their codebase whenever I need to check whether I understood a new concept.
Cause you are looking at it funny :)
You have created a red square from (-1,-1) to (1,1) // display()
then said that the camera will look at it using an orthogonal projection // glOrtho
(it creates a projection matrix, using a point to place the camera and giving it a direction)
and maybe tilted it a little by glRotating the MODEL*VIEW*.
PS: You have to think of the gl commands as messages sent over to the GL subsystem; those messages alter its various states: what's in the scene, where the camera is, where the lights are, and so on.
Related
I'm doing this OpenGL project for my Computer Graphics class, where I display an object and rotate it and such. The thing is that at the beginning of the project we used glOrtho() and it looked really great.
But now the teacher said we have to use glFrustum() for perspective, and if I use that function the object is drawn like this, and I really don't know why this happens:
This is my code from the init() function where everything changes:
void init (void)
{
    /* select clearing (background) color */
    glClearColor (0.0, 0.0, 0.0, 0.0);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-100.0, 100.0, -60.0, 160.0, -100.0, 100.0);
    //glFrustum(-100, 100, -100, 100, 1, 40);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(90, 0, 1, 0);
}
I'd appreciate your help.
EDIT: If I use glFrustum(-100, 100, -100, 100, 20, 200) it looks like this, as if I'm getting closer. But what about the left, right, top and bottom parameters? Are they okay with those values?
It's hard to be certain without more information; perhaps the model could give some insight. But I suspect it may have to do with the clipping plane arguments (nearVal and farVal, as described here) passed to glFrustum (1, 40). Perhaps try setting them to a broader range, like in your glOrtho call: 1, 150. (Note: neither nearVal nor farVal can be negative when passed to glFrustum.)
This all depends on the scale of the model and how it is positioned relative to the camera. If part of the model falls outside of the clipping planes, then it will be, well... clipped.
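For example, a minimal sketch of a perspective setup, reusing the near/far values from your EDIT (the right values depend on your model's size, so these are only a guess):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-100.0, 100.0, -100.0, 100.0, 20.0, 200.0); // near and far must both be positive
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -100.0f); // push the model into the visible depth range (20..200 in front of the camera)
glRotatef(90, 0, 1, 0);

Note that left, right, bottom and top are measured at the near plane, so together with nearVal they control the field of view: smaller side values (or a larger nearVal) give a narrower, more zoomed-in view.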
I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game.
This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think of it like a game trainer.
The goal is, in the first place, to draw a few primitives at a specific point on the screen. Later I want to have a nice looking little "gui" component in the game window.
The game uses the "SwapBuffers" method from GDI32.dll.
Currently I'm able to inject a custom DLL into the game and hook the "SwapBuffers" method.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's 3D drawing mode to 2D, drawing the 2D overlay on the screen, and then switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3D-to-2D switching code)?
I would like to know the best way to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component from OpenGL primitives such as lines and quads?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code or tutorial for something similar is appreciated too.
The game, by the way, is Counter-Strike 1.6, and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
//1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method, save the old context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();
    //2. If the new context has not already been created, create it
    //(we need the "hdc" parameter for the current window, so the initialization
    //process happens in this method - anyone have a better solution?)
    //Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }
    //Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}
GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}
GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
cs overlay example
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap-buffer function which one(s) are the right ones to modify.
There are a couple of issues with this kind of OpenGL interception, though:
OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might have no effect, and what actually appears on the screen depends on the shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be wound incorrectly for the current culling mode. If the color masks are set to GL_FALSE, or the draw buffer is not set to where you expect, nothing will appear.
Note that your attempt to "reset" the matrices is wrong, too. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case; it could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading the identity first, so this alone is a good reason for nothing to appear on the screen.
Since OpenGL is a state machine, you must also restore all the state you touched. You already try this with the matrix stack push/pop, but you failed, for example, to restore the exact matrix mode. As seen in point 1, many more state changes may be required, so restoring it all will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here (see the sketch at the end of this answer).
SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as parameter and only indirectly refers to any GL context; it might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation function as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get at the context.
Putting this all together opens up another alternative: You can create your own GL context, bind it temporarily during the hooked SwapBuffers call and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You still can augment the image content the application has rendered, since the framebuffer is part of the drawable, not the GL context. Doing so might have a negative impact on performance, but it might be so small that you never would even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.
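As mentioned above, here is a rough sketch of saving and restoring state with the legacy attribute stack. It only covers fixed-function state that glPushAttrib knows about, so it is a starting point rather than a complete solution:

// sketch: body of the hooked SwapBuffers, assuming a compatibility context is bound
glPushAttrib(GL_ALL_ATTRIB_BITS); // saves enables, depth func, culling, matrix mode, ...
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// ... draw the overlay here ...
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib(); // also restores whatever matrix mode the game had current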
I am trying to draw a 2D scene with a texture as background and then ( as the program flows and does computations ) draw different primitives on the "canvas". As a test case I wanted to draw a blue quad on the background image.
I have looked at several resources and SO questions to try get the information I need to accomplish the task ( e.g. this tutorial for first primitive rendering, SOIL "example" for texture loading ).
My understanding was that the texture would be drawn at Z=0, and the quad as well. The quad would thus "cover" a portion of the texture (be drawn on top of it), which is what I want. Instead, the result of my display function is my initial texture in black/blue, and not my texture (in its original colors) with a blue quad drawn on it. This is the display function code:
void display (void) {
    glClearColor (0.0,0.0,0.0,1.0);
    glClear (GL_COLOR_BUFFER_BIT);

    // background render
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f); // window size is 1024x512
    glEnable( GL_TEXTURE_2D );
    glBindTexture( GL_TEXTURE_2D, texture );
    glBegin (GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd(); // here I get the texture properly displayed in window
    glDisable(GL_TEXTURE_2D);

    // foreground render
    glLoadIdentity();
    gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glColor3f(0.0, 0.0, 1.0);
    glBegin (GL_QUADS);
    glVertex2d(400.0,100.0);
    glVertex2d(400.0,500.0);
    glVertex2d(700.0,100.0);
    glVertex2d(700.0,500.0);
    glEnd(); // now instead of a rendered blue quad I get my texture coloured in blue
    glutSwapBuffers();
}
I have already tried many modifications, but since I am just beginning with OpenGL and don't yet understand a lot of it, my attempts failed. For example, I tried pushing and popping matrices before and after drawing the quad, clearing the depth buffer, changing the parameters in gluPerspective, etc.
How do I have to modify my code so it renders the quad properly on top of the background texture image of my 2D scene? Being a beginner, extra explanation of the modifications (as well as of the mistakes in the present code) and of the principles in general would be greatly appreciated.
EDIT - after answer by Reto Koradi :
I have tried to follow the instructions, and the modified code now looks like this:
// foreground render
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
glColor3f(0.0, 0.0, 1.0);
glBegin (GL_QUADS); // same from here on
Now I can see the blue "quad", but it is not displayed properly; it looks something like this.
Besides that, the whole scene is flashing really quickly.
What do I have to change in my code so that the quad is displayed properly and the screen doesn't flash?
You are setting up a perspective transformation before rendering the blue quad:
glLoadIdentity();
gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
The way gluPerspective() is defined, it sets up a transformation that looks from the origin down the negative z-axis, with the near and far values specifying the distance range that will be visible. With this transformation, z-values from -1.0 to -100.0 will be visible. Which does not include your quad at z = 0.0.
If you want to draw your quad in 2D coordinate space, the easiest solution is to not use gluPerspective() at all. Just use a glOrtho() type transformation like you did for your initial drawing.
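In code, that 2D path could look like this sketch, reusing the same ortho bounds as the background (it replaces the gluPerspective block):

// foreground render, staying in 2D window coordinates
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();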
If you want perspective, you will need a GL_MODELVIEW transformation as well. You can start with a translation in the negative z-direction, within a range of 1.0 to 100.0. You may have to adjust your coordinates for the different coordinate system as well, or use additional transformations that also translate in xy-direction, and possibly scale.
The code also has the coordinates in the wrong order for drawing the blue quad. You either have to change the draw call to GL_TRIANGLE_STRIP (recommended because it at least gets you one step closer to using features that are not deprecated), or swap the order of the last two vertices:
glBegin (GL_QUADS);
glVertex2d(400.0,100.0);
glVertex2d(400.0,500.0);
glVertex2d(700.0,500.0);
glVertex2d(700.0,100.0);
glEnd();
I'm writing a plugin for an application called Autodesk MotionBuilder, which has an OpenGL renderer, and I'm trying to render textured geometry into the scene. I have a window with a 3D View embedded in it, and every time my window is rendered, this is (in a nutshell) what happens:
I tell the renderer that I'm about to draw into a region with a given size
I tell the renderer to draw the MotionBuilder scene in that region
I draw some additional stuff into and/or on top of the scene
The challenge here is that I'm inheriting some arbitrary OpenGL state from MotionBuilder's renderer, which varies depending on what it's drawing and what's present in the scene. I've been dealing with this fine so far, but there's one thing I can't figure out. The way that OpenGL interprets my UV coordinates seems to change based on whatever MotionBuilder is doing behind my back.
Here's my rendering code. If there's no textured geometry in the scene, meaning MotionBuilder hasn't yet fiddled with any texture-related attributes, it works as expected.
// Tell MotionBuilder's renderer to draw the scene
RenderScene();
// Clear whatever arbitrary state MotionBuilder left for us
InitializeAttributes(); // includes glPushAttrib(GL_ALL_ATTRIB_BITS)
InitializePerspective(); // projects into the scene / loads matrices
// Enable texturing, bind to our texture, and draw a triangle into the scene
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexture);
glBegin(GL_TRIANGLES);
glColor4f(1.0, 1.0, 1.0, 0.5f);
glTexCoord2f(1.0, 0.0); glVertex3f(128.0, 0.0, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f( 0.0, 128.0, 0.0);
glTexCoord2f(0.0, 0.0); glVertex3f( 0.0, 0.0, 0.0);
glEnd();
// Clean up so we don't confound MotionBuilder's initial expectations
RestoreState(); // includes glPopAttrib()
Now, if I bring in some meshes with textures, something odd happens. My texture coordinates get scaled way up. Here's a before and after:
(source: awforsythe.com)
As you can see from the close-up on the right, when MotionBuilder is asked to render a texture whose file it can't find, it instead loads this small question mark texture and tiles it across the geometry. My only hypothesis is that MotionBuilder is changing some global texture coordinate scalar so that, for example, glTexCoord2f(0.5, 1.0) will instead be interpreted as if it were (50.0, 100.0). Is there such a feature in OpenGL? Any idea what I need to modify in order to preserve my texture coordinates as I've entered them?
Since typing the above and after doing a bit of research, I have discovered that there's a GL_TEXTURE matrix that's used to this effect. Neat! And indeed, when I get the value of this matrix initially, it's the good ol' identity matrix:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
When I check it again after MotionBuilder fudges up my texture coordinates:
16 0 0 0
0 16 0 0
0 0 1 0
0 0 0 1
How telling! But here's a slight problem: if I try to explicitly set the texture matrix before doing my own drawing, regardless of what MotionBuilder is doing, it seems like my texture coordinates have no effect and it simply samples the lower-left corner of the texture (0.0, 0.0) for every vertex.
Here's the attempted fix, placed after RenderScene in the code posted above:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
I can verify that the value of GL_TEXTURE_MATRIX is now the identity matrix, but no matter what coordinates I specify in glTexCoord2f, it's always drawn as if the coordinates for each vertex were (0.0, 0.0):
(source: awforsythe.com)
Any idea what else could be affecting how OpenGL interprets my texture coordinates?
Aha! These calls:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
...have to be made after GL_TEXTURE_2D is enabled.
...should be followed up by setting the matrix mode back to GL_MODELVIEW. It turns out, apparently, that some functions I was calling immediately after resetting the texture matrix (glViewport and/or gluPerspective?) affect the current matrix stack. So those calls were affecting the texture matrix, causing my texture coordinates to be transformed in unexpected ways.
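Putting the two observations together, a minimal sketch of the fix (assuming a fixed-function context, as in the code above):

glEnable(GL_TEXTURE_2D);    // enable texturing first
glMatrixMode(GL_TEXTURE);
glLoadIdentity();           // undo MotionBuilder's 16x texture-coordinate scale
glMatrixMode(GL_MODELVIEW); // so later matrix calls don't land on the texture matrix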
I think I've got it now.
I have a problem when rendering cubes in OpenGL. I am drawing two cubes: one is a wire cube centered at the origin, while the other is offset from the origin and solid. I have mapped some keys to rotate the objects by some degrees with respect to the origin, so the whole scene can rotate around the origin.
The problem is that when I render the scene, and the wire cube is supposed to be in front of the other solid cube, it is not displayed correctly.
In the image above, the colored cube is supposed to be behind the wire cube; i.e., the green wire cube should be on top.
The cube also misbehaves after I rotate it a little around the x-axis (the current horizontal line): it has missing faces and does not render correctly.
What am I doing wrong?
I have coded the following. Note that rotateX, rotateY and rotateZ are mapped to keys and are my global rotation variables.
//The Initialize function, called once:
void Init(){
    glEnable(GL_TEXTURE_2D);
    glShadeModel(GL_SMOOTH);                           // Enable Smooth Shading
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);              // Black Background
    glClearDepth(1.0f);                                // Depth Buffer Setup
    glEnable(GL_DEPTH_TEST);                           // Enables Depth Testing
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really Nice Perspective Calculations
    glEnable(GL_LIGHTING);
}

void draw(){
    //The main draw function
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45, 640/480.0, .5, 100);
    glMatrixMode(GL_MODELVIEW); //select the modelview matrix
    glLoadIdentity();
    gluLookAt(0,0,5,
              0,0,0,
              0,1,0);
    glRotatef(rotateX,1,0,0);
    glRotatef(rotateY,0,1,0);
    glRotatef(rotateZ,0,0,1);
    drawScene(); // this just draws the main axis lines
    glutWireCube(1);
    glPopMatrix();
    glPushMatrix();
    glTranslatef(-2,1,0);
    drawNiceCube();
    glPopMatrix();
    glutSwapBuffers();
}
The code for drawNiceCube() just uses GL_QUADS, while the wire cube is drawn with glutWireCube, which is built into GLUT.
EDIT:
I have posted the full code at http://pastebin.com/p1kwPjEM, sorry if it is not well documented.
Did you also request a window with a depth buffer?
glutInitDisplayMode( ... | GLUT_DEPTH | ...);
Update:
Did you enable face culling somewhere?
glEnable(GL_CULL_FACE);
This may be caused by the winding order (clockwise vs. counter-clockwise):
10.090 How does face culling work? Why doesn't it use the surface normal?
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE); .
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in DC space is always defined.
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension...
from here
You can also read here.
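To test whether culling/winding is the culprit, a quick sketch; either disable culling entirely, or state the convention explicitly:

glDisable(GL_CULL_FACE); // quick test: if the missing faces reappear, culling was the cause

// or keep culling but make the convention explicit:
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);     // discard back faces
glFrontFace(GL_CCW);     // counter-clockwise vertex order = front face (the GL default)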
datenwolf solved my problem. I quote him:
"#JonathanSimbahan: Parts of your code are redundant, but something is missing: You forgot to call Init(); after creating your GLUT window, hence depth testing and all the other state never get enabled. I for one suggest you don't use Init at all and move it's code into the drawing code, where it actually belongs."