C++ OpenGL Empty Cube with Visible Edges

I am trying to create a cube. I want the cube itself to be clear (black since the background is black), but I'd like the 12 lines to be thin and white. Is the only way to do this to create lines and lay them on top of the edges? Or is there a different way to approach it?
The reason is that I have to create balls bouncing around inside the box.
Maybe I should just do glBegin(GL_LINES) and not even worry about surfaces to collide against since I can just create that mathematically?
I am just creating my sides like this:
glBegin(GL_POLYGON);
glVertex3f( -0.5, -0.5, 0.5 );
glVertex3f( -0.5, 0.5, 0.5 );
glVertex3f( -0.5, 0.5, -0.5 );
glVertex3f( -0.5, -0.5, -0.5 );
glEnd();

You can just draw a 'wireframe' cube: you will see the edges but no faces. There are two easy options. Either keep drawing your faces and tell OpenGL to rasterize only their outlines:
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // render polygon outlines instead of filled faces
or skip the faces entirely and draw the 12 edges yourself:
glBegin(GL_LINES);
// endpoints of the first line/edge
glVertex3f( ...
glVertex3f( ...
// endpoints of the second line/edge
glVertex3f( ...
glVertex3f( ...
// ... and so on through all 12 lines/edges
glEnd();
Now, this isn't the most efficient. You could use a line strip perhaps, or just draw 6 quads. But since this is "day one", this might be an easy start.
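For reference, here is one way the full set of 12 edges could be written out - a minimal sketch (the corner/edge names are mine, not from the original answer) using an index table of corner pairs for the same unit cube centred at the origin as in the question:
// 8 corners of a unit cube centred at the origin
static const GLfloat corner[8][3] = {
    {-0.5f,-0.5f,-0.5f}, { 0.5f,-0.5f,-0.5f}, { 0.5f, 0.5f,-0.5f}, {-0.5f, 0.5f,-0.5f},
    {-0.5f,-0.5f, 0.5f}, { 0.5f,-0.5f, 0.5f}, { 0.5f, 0.5f, 0.5f}, {-0.5f, 0.5f, 0.5f}
};
// 12 edges as pairs of corner indices
static const int edge[12][2] = {
    {0,1},{1,2},{2,3},{3,0},   // back face (z = -0.5)
    {4,5},{5,6},{6,7},{7,4},   // front face (z = +0.5)
    {0,4},{1,5},{2,6},{3,7}    // edges connecting the two faces
};
glColor3f(1.0f, 1.0f, 1.0f);   // thin white lines on the black background
glBegin(GL_LINES);
for (int i = 0; i < 12; ++i) {
    glVertex3fv(corner[edge[i][0]]);
    glVertex3fv(corner[edge[i][1]]);
}
glEnd();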
Eventually you'll want to not use fixed functionality at all - it's deprecated. But this will give you an environment in which to get comfortable with matrices, lighting, etc. When you have serious geometry to render, you'll put it in buffers and send it off to the GPU in big chunks, letting your GLSL shaders process the data on the graphics card.
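As a taste of that buffered approach, here is a minimal sketch (my own, not part of the original answer) that packs the 24 edge endpoints from the corner/edge tables above into a vertex buffer object once and then draws them with a single call - still using the fixed-function vertex array path rather than GLSL, for brevity:
// build a flat array of the 24 edge endpoints (reusing corner[] and edge[] from above)
GLfloat edgeVerts[12 * 2 * 3];
for (int i = 0; i < 12; ++i)
    for (int e = 0; e < 2; ++e)
        for (int c = 0; c < 3; ++c)
            edgeVerts[(i * 2 + e) * 3 + c] = corner[edge[i][e]][c];

// one-time setup (requires OpenGL 1.5+, or an extension loader such as GLEW)
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(edgeVerts), edgeVerts, GL_STATIC_DRAW);

// every frame
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const void*)0);   // offset 0 into the bound buffer
glDrawArrays(GL_LINES, 0, 24);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);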
Welcome to graphics!

Maybe I should just do glBegin(GL_LINES) and not even worry about surfaces to collide against since I can just create that mathematically?
Correct. You already know the bounds of your cube.
Do some lines, and bounce your balls.
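To give an idea of what "create that mathematically" could look like, here is a minimal sketch (names like Ball, stepBall, pos, vel, radius and dt are hypothetical) that bounces a ball inside the [-0.5, 0.5] cube from the question by reflecting its velocity at the walls:
struct Ball { float pos[3]; float vel[3]; float radius; };

void stepBall(Ball& b, float dt) {
    for (int axis = 0; axis < 3; ++axis) {
        b.pos[axis] += b.vel[axis] * dt;
        const float limit = 0.5f - b.radius;   // keep the whole ball inside the box
        if (b.pos[axis] >  limit) { b.pos[axis] =  limit; b.vel[axis] = -b.vel[axis]; }
        if (b.pos[axis] < -limit) { b.pos[axis] = -limit; b.vel[axis] = -b.vel[axis]; }
    }
}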

You could set the polygon mode (glPolygonMode) to GL_LINE to achieve the same thing.
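A minimal sketch of that approach, applied to the face code from the question (the restore to GL_FILL is only needed if you draw filled polygons elsewhere):
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);   // rasterize only polygon outlines
glColor3f(1.0f, 1.0f, 1.0f);                 // white edges on the black background
glBegin(GL_POLYGON);                         // one face; repeat for the other five
glVertex3f(-0.5f, -0.5f,  0.5f);
glVertex3f(-0.5f,  0.5f,  0.5f);
glVertex3f(-0.5f,  0.5f, -0.5f);
glVertex3f(-0.5f, -0.5f, -0.5f);
glEnd();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   // back to the default fill mode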
Maybe I should just do glBegin(GL_LINES) and not even worry about surfaces to collide against since I can just create that mathematically?
OpenGL isn't going to help you with collisions of any sort.
As a somewhat off topic note, consider using a more modern approach. Immediate mode drawing is effectively deprecated, even if you aren't using the newer OpenGL versions.
A modern OpenGL tutorial is a decent place to start.

Related

OpenGL: While drawing a polygon, what if the first vertex is outside of the screen space (Triangle fans)

I'm following Computer Graphics Through OpenGL, 2nd Edition by Sumanta Guha, and on page 35 it says:
Raising the first vertex of (the original) square.cpp from glVertex3f(20.0, 20.0, 0.0) to glVertex3f(20.0, 20.0, 1.5) causes the square – actually, the new figure which is not a square any more – to be clipped. If, instead, the second vertex is raised from glVertex3f(80.0, 20.0, 0.0) to glVertex3f(80.0, 20.0, 1.5), then the figure is clipped too, but very differently from when the first vertex is raised. Why? Should not the results be similar by symmetry?
Hint: OpenGL draws polygons after triangulating them as so-called triangle fans with the first vertex of the polygon the center of the fan. For example, the fan in Figure 2.16 consists of three triangles around vertex v0.
where the corresponding code looks like
glVertex3f(20.0, 20.0, 0.0);
glVertex3f(80.0, 20.0, 0.0);
glVertex3f(80.0, 80.0, 0.0);
glVertex3f(20.0, 80.0, 0.0);
If I set only the z-coordinate of the first vertex to 1.5f, I get one clipped figure, and if I set only the z-coordinate of the second vertex to 1.5f, I get a very different one.
In the latter case I can understand the output because of the clipping, but I don't understand why I get that result in the former case.
You are drawing two triangles: A,B,C and A,C,D.
If you change the z-coordinate of one of the vertices, the two triangles no longer lie in the same plane.
In the first case you change A, which affects both triangles. In the second case you change B, which affects only the first triangle (A,B,C).
Be warned that the code you are using is horribly outdated and will not work in a modern core profile of OpenGL, where "modern" means: for over a decade now.
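To make the decomposition concrete, here is a minimal sketch (not from the original answer) that draws the same square explicitly as the triangle fan the book describes:
glBegin(GL_TRIANGLE_FAN);
glVertex3f(20.0f, 20.0f, 0.0f);   // v0 (A) - shared by every triangle in the fan
glVertex3f(80.0f, 20.0f, 0.0f);   // v1 (B) - used only by the first triangle (A,B,C)
glVertex3f(80.0f, 80.0f, 0.0f);   // v2 (C)
glVertex3f(20.0f, 80.0f, 0.0f);   // v3 (D)
glEnd();
// Raising v0's z therefore changes both triangles, while raising v1's z
// changes only the triangle (A,B,C) - hence the very different clipping.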

Drawing a line along z axis in opengl

I am trying to draw a line from the point (0.5, -0.5, 0.0) to (0.5, -0.5, -0.5) using GL_LINES, i.e. going in the z direction.
Initialization of the window:
glutInitDisplayMode(GLUT_DOUBLE|GLUT_DEPTH|GLUT_RGB);
Setup in the display function:
glClearColor(1.0, 0.0, 0.0, 0.0);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glColor3f(0.0, 0.0, 0.0);
However, the line is not displayed on the screen. Please help as to how to display a line going along the z direction.
You should probably share the piece of code where you actually attempt to draw the line using GL_LINES. Without it I must assume that you don't know how to do it properly. The correct way to draw the line after the setup is:
glBegin(GL_LINES);
glVertex3f(0.5f, -0.5f, 0.0f);
glVertex3f(0.5f, -0.5f, -0.5f);
glEnd();
Have you tried it that way? Also, if you use double buffering, don't forget to swap buffers after rendering, using glutSwapBuffers() when using glut or SwapBuffers(hdc) when not using it.
Edit:
Additionally, you need to set up your camera correctly and move it slightly to actually see the line that you draw (it is possible that it is outside of the view area):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45,1,0.1,100); //Example values but should work in your case
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This piece of code should set up your projection correctly. Now, OpenGL by default looks down the negative Z axis, so if you want to see your line you need to move the camera towards the positive end of the Z axis (in fact the code moves the whole world rather than just your camera, but the effect is the same):
glTranslatef(0.0f, 0.0f, -1.0f);
Use this before glBegin and you should be good to go.
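Putting the pieces above together, a complete display callback might look roughly like this - a sketch only, assuming GLUT is initialized with the flags from the question; the -2.0f offset is simply a value that keeps both endpoints comfortably inside the 45 degree frustum:
void display(void) {
    glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 1.0, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -2.0f);   // back away from the line so it is in view

    glColor3f(0.0f, 0.0f, 0.0f);       // black line on the red clear colour
    glBegin(GL_LINES);
    glVertex3f(0.5f, -0.5f,  0.0f);
    glVertex3f(0.5f, -0.5f, -0.5f);
    glEnd();

    glutSwapBuffers();                 // required with GLUT_DOUBLE
}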

Drawing a primitive ( GL_QUADS ) on top of a 2D texture - no quad rendered, texture colour changed

I am trying to draw a 2D scene with a texture as background and then ( as the program flows and does computations ) draw different primitives on the "canvas". As a test case I wanted to draw a blue quad on the background image.
I have looked at several resources and SO questions to try get the information I need to accomplish the task ( e.g. this tutorial for first primitive rendering, SOIL "example" for texture loading ).
My understanding was that the texture would be drawn at Z=0, and the quad as well; the quad would thus "cover" a portion of the texture (be drawn on top of it), which is what I want. Instead, the result of my display function is my initial texture in black/blue colour, not my texture (in its original colours) with a blue quad drawn on it. This is the display function code:
void display (void) {
glClearColor (0.0,0.0,0.0,1.0);
glClear (GL_COLOR_BUFFER_BIT);
// background render
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f); // window size is 1024x512
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, texture );
glBegin (GL_QUADS);
glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
glEnd(); // here I get the texture properly displayed in window
glDisable(GL_TEXTURE_2D);
// foreground render
glLoadIdentity();
gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glColor3f(0.0, 0.0, 1.0);
glBegin (GL_QUADS);
glVertex2d(400.0,100.0);
glVertex2d(400.0,500.0);
glVertex2d(700.0,100.0);
glVertex2d(700.0,500.0);
glEnd(); // now instead of a rendered blue quad I get my texture coloured in blue
glutSwapBuffers(); }
I have already tried many modifications, but since I am just beginning with OpenGL and don't yet understand a lot of it, my attempts failed. For example, I tried pushing and popping matrices before and after drawing the quad, clearing the depth buffer, changing the parameters in gluPerspective, etc.
How do I have to modify my code so that it renders the quad properly on top of the background texture of my 2D scene? Being a beginner, extra explanations of the modifications (as well as of the mistakes in the present code) and of the principles in general will be greatly appreciated.
EDIT - after answer by Reto Koradi :
I have tried to follow the instructions, and the modified code now looks like :
// foreground render
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
glColor3f(0.0, 0.0, 1.0);
glBegin (GL_QUADS); // same from here on
Now I can see the blue "quad", but it is not displayed properly.
Besides that, the whole scene is flashing really quickly.
What do I have to change in my code so that the quad is displayed properly and the screen doesn't flash?
You are setting up a perspective transformation before rendering the blue quad:
glLoadIdentity();
gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
The way gluPerspective() is defined, it sets up a transformation that looks from the origin down the negative z-axis, with the near and far values specifying the distance range that will be visible. With this transformation, z-values from -1.0 to -100.0 will be visible. Which does not include your quad at z = 0.0.
If you want to draw your quad in 2D coordinate space, the easiest solution is to not use gluPerspective() at all. Just use a glOrtho() type transformation like you did for your initial drawing.
If you want perspective, you will need a GL_MODELVIEW transformation as well. You can start with a translation in the negative z-direction, within a range of 1.0 to 100.0. You may have to adjust your coordinates for the different coordinate system as well, or use additional transformations that also translate in xy-direction, and possibly scale.
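For that second option, a minimal sketch (values are illustrative only) of what the perspective setup could look like:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f);   // any distance between the near and far values
// note: the quad's pixel-sized coordinates (400..700) would also have to be
// rescaled to this coordinate system, which is why the glOrtho route is simpler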
The code also has the coordinates in the wrong order for drawing the blue quad. You either have to change the draw call to GL_TRIANGLE_STRIP (recommended because it at least gets you one step closer to using features that are not deprecated), or swap the order of the last two vertices:
glBegin (GL_QUADS);
glVertex2d(400.0,100.0);
glVertex2d(400.0,500.0);
glVertex2d(700.0,500.0);
glVertex2d(700.0,100.0);
glEnd();
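Putting the recommended pieces together, the foreground pass could look roughly like this - a sketch only, reusing the 1024x512 window mapping from the question and the strip-friendly vertex order:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1024.0, 512.0, 0.0, 0.0, 1.0);   // same 2D mapping as the background

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glColor3f(0.0f, 0.0f, 1.0f);
glBegin(GL_TRIANGLE_STRIP);   // the original vertex order is already strip order
glVertex2d(400.0, 100.0);
glVertex2d(400.0, 500.0);
glVertex2d(700.0, 100.0);
glVertex2d(700.0, 500.0);
glEnd();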

What could globally affect texture coordinate values in OpenGL?

I'm writing a plugin for an application called Autodesk MotionBuilder, which has an OpenGL renderer, and I'm trying to render textured geometry into the scene. I have a window with a 3D View embedded in it, and every time my window is rendered, this is (in a nutshell) what happens:
I tell the renderer that I'm about to draw into a region with a given size
I tell the renderer to draw the MotionBuilder scene in that region
I draw some additional stuff into and/or on top of the scene
The challenge here is that I'm inheriting some arbitrary OpenGL state from MotionBuilder's renderer, which varies depending on what it's drawing and what's present in the scene. I've been dealing with this fine so far, but there's one thing I can't figure out. The way that OpenGL interprets my UV coordinates seems to change based on whatever MotionBuilder is doing behind my back.
Here's my rendering code. If there's no textured geometry in the scene, meaning MotionBuilder hasn't yet fiddled with any texture-related attributes, it works as expected.
// Tell MotionBuilder's renderer to draw the scene
RenderScene();
// Clear whatever arbitrary state MotionBuilder left for us
InitializeAttributes(); // includes glPushAttrib(GL_ALL_ATTRIB_BITS)
InitializePerspective(); // projects into the scene / loads matrices
// Enable texturing, bind to our texture, and draw a triangle into the scene
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexture);
glBegin(GL_TRIANGLES);
glColor4f(1.0, 1.0, 1.0, 0.5f);
glTexCoord2f(1.0, 0.0); glVertex3f(128.0, 0.0, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f( 0.0, 128.0, 0.0);
glTexCoord2f(0.0, 0.0); glVertex3f( 0.0, 0.0, 0.0);
glEnd();
// Clean up so we don't confound MotionBuilder's initial expectations
RestoreState(); // includes glPopAttrib()
Now, if I bring in some meshes with textures, something odd happens. My texture coordinates get scaled way up. Here's a before and after:
[before/after screenshot] (source: awforsythe.com)
As you can see from the close-up on the right, when MotionBuilder is asked to render a texture whose file it can't find, it instead loads this small question mark texture and tiles it across the geometry. My only hypothesis is that MotionBuilder is changing some global texture coordinate scalar so that, for example, glTexCoord2f(0.5, 1.0) will instead be interpreted as if it were (50.0, 100.0). Is there such a feature in OpenGL? Any idea what I need to modify in order to preserve my texture coordinates as I've entered them?
Since typing the above and after doing a bit of research, I have discovered that there's a GL_TEXTURE matrix that's used to this effect. Neat! And indeed, when I get the value of this matrix initially, it's the good ol' identity matrix:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
When I check it again after MotionBuilder fudges up my texture coordinates:
16 0 0 0
0 16 0 0
0 0 1 0
0 0 0 1
How telling! But here's a slight problem: if I try to explicitly set the texture matrix before doing my own drawing, regardless of what MotionBuilder is doing, it seems like my texture coordinates have no effect and it simply samples the lower-left corner of the texture (0.0, 0.0) for every vertex.
Here's the attempted fix, placed after RenderScene in the code posted above:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
I can verify that the value of GL_TEXTURE_MATRIX is now the identity matrix, but no matter what coordinates I specify in glTexCoord2f, it's always drawn as if the coordinates for each vertex were (0.0, 0.0):
[screenshot] (source: awforsythe.com)
Any idea what else could be affecting how OpenGL interprets my texture coordinates?
Aha! These calls:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
...have to be made after GL_TEXTURE_2D is enabled, and
...should be followed up by setting the matrix mode back to GL_MODELVIEW. It turns out that some functions I was calling immediately after resetting the texture matrix (gluPerspective in particular, which operates on whatever matrix mode is current; glViewport does not touch the matrix stack) were still affecting the texture matrix, causing my texture coordinates to be transformed in unexpected ways.
I think I've got it now.
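For reference, a minimal sketch of the save/restore pattern described above (my own wording of the fix; mTexture and the triangle are taken from the question) that resets the texture matrix for the custom drawing and then hands MotionBuilder back exactly what it had:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexture);

glMatrixMode(GL_TEXTURE);
glPushMatrix();                // keep MotionBuilder's texture matrix (e.g. the 16x scale)
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);    // make sure later matrix calls don't hit GL_TEXTURE

glBegin(GL_TRIANGLES);
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(128.0f,   0.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(  0.0f, 128.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f); glVertex3f(  0.0f,   0.0f, 0.0f);
glEnd();

glMatrixMode(GL_TEXTURE);
glPopMatrix();                 // restore whatever MotionBuilder had set up
glMatrixMode(GL_MODELVIEW);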

View the inside of a cylinder

I am drawing a gluCylinder with a gluDisk on top. Without culling enabled, I get the desired cylinder-with-lid effect. However, if I enable culling, the disk (i.e. the lid) disappears. Why is that? This is the main question. In addition, with culling enabled the back faces of the cylinder are also not drawn. I understand why this happens, but I would still like to see the inside of the cylinder drawn. The code is:
glPushMatrix()
quadratic = gluNewQuadric()
gluQuadricNormals(quadratic, GLU_SMOOTH)
gluQuadricTexture(quadratic, GL_TRUE)
glRotatef(90, 1, 0, 0)
glTranslate(0, 0, -3*sz)
gluCylinder(quadratic, 0.75*sz, 0.75*sz, 3.0*sz, 32, 32)
gluDisk(quadratic, 0.0, 0.75*sz, 32, 32)
glPopMatrix()
Your disk is facing in the wrong direction (wrong winding), so it is culled. You can try to reverse its orientation using gluQuadricOrientation; that should do the trick. For more information, refer to the documentation for gluDisk and glCullFace.
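A minimal sketch of that suggestion (shown in C-style syntax; quadratic and sz are from the question):
gluQuadricOrientation(quadratic, GLU_INSIDE);    // flip the disk's front face / winding
gluDisk(quadratic, 0.0, 0.75 * sz, 32, 32);
gluQuadricOrientation(quadratic, GLU_OUTSIDE);   // back to the default for other quadrics
For the second part of the question - seeing the inside walls of the cylinder - the simplest route is to disable face culling around the cylinder draw (glDisable(GL_CULL_FACE), then glEnable(GL_CULL_FACE) afterwards), since culling is exactly what removes those back faces.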
A disk is just a plane without any thickness, so one side is the front and the other is the back, and with culling enabled one of them gets culled away. You are probably looking at the side that gets culled away; if so, just rotate the disk around so its front face points toward you. Nothing fancier to it. So just wrap it in a:
glPushMatrix();
glRotatef(180.0f, 0.0f, 1.0f, 0.0f);
gluDisk(quadratic, 0.0, 0.75*sz, 32, 32);
glPopMatrix();
Or, as kroneml suggests, change the quadric's orientation so its winding is reversed. Decide for yourself which approach is more conceptually correct in your situation.