I want to draw two separate objects so that I can perform a query while drawing the second object. The drawing code will look something like this:
glDrawElements(GL_TRIANGLES,...); // draw first object
glBeginQuery(GL_SAMPLES_PASSED, queries[0]);
glDrawElements(GL_TRIANGLES,...); // draw second object
glEndQuery(GL_SAMPLES_PASSED);
glGetQueryObjectiv(queries[0], GL_QUERY_RESULT, &result);
return result;
Most OpenGL tutorials don't go beyond a single glDraw*() command. As I understand it from this site I need two Vertex Array Objects, but the site doesn't explain how to set the Buffer Data for the separate objects. For the sake of simplicity, let's just say I want the objects to be a single triangle each:
Triangle1:
vertex1: -0.5, 0.0, 0.0
vertex2: -0.5, 0.5, 0.0
vertex3: 0.0, 0.0, 0.0
Triangle2:
vertex1: 0.0, 0.0, 0.0
vertex2: 0.5, 0.5, 0.0
vertex3: 0.5, 0.0, 0.0
Can someone show me how to setup the Vertex Array Objects, Vertex Buffer Objects, and Element Array Buffers to perform this query in C++ and OpenGL 3.2?
Your code for drawing geometry is missing two essential steps:
creation of the GL_ARRAY_BUFFER (glGenBuffers, glBindBuffer, glBufferData)
association of the drawing state with that array buffer (calls to the gl…Pointer functions, e.g. glVertexAttribPointer in core-profile GL)
It is these steps that make it possible to draw multiple meshes.
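For the two triangles in the question, a minimal OpenGL 3.2 sketch of both steps might look like the following. It assumes a shader program is already compiled and bound with its vertex position attribute at location 0, and it omits error checking; queries[0] is the query object from the snippet above, created here with glGenQueries.
GLuint vaos[2], vbos[2], ebos[2], queries[1];
GLfloat tri1[] = { -0.5f, 0.0f, 0.0f,   -0.5f, 0.5f, 0.0f,   0.0f, 0.0f, 0.0f };
GLfloat tri2[] = {  0.0f, 0.0f, 0.0f,    0.5f, 0.5f, 0.0f,   0.5f, 0.0f, 0.0f };
GLuint indices[] = { 0, 1, 2 };

glGenVertexArrays(2, vaos);
glGenBuffers(2, vbos);
glGenBuffers(2, ebos);
glGenQueries(1, queries);

for (int i = 0; i < 2; ++i)
{
    glBindVertexArray(vaos[i]);
    // Step 1: create and fill the array buffer for this object
    glBindBuffer(GL_ARRAY_BUFFER, vbos[i]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri1), i == 0 ? tri1 : tri2, GL_STATIC_DRAW);
    // The element array buffer binding is stored in the VAO as well
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebos[i]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    // Step 2: associate the buffer with attribute 0 (3 floats per vertex)
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
}
glBindVertexArray(0);

// Drawing: bind each object's VAO, wrap the second draw in the query
glBindVertexArray(vaos[0]);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void*)0);   // first object

glBeginQuery(GL_SAMPLES_PASSED, queries[0]);
glBindVertexArray(vaos[1]);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void*)0);   // second object
glEndQuery(GL_SAMPLES_PASSED);

GLint result = 0;
glGetQueryObjectiv(queries[0], GL_QUERY_RESULT, &result);
The element buffers are overkill for a single triangle each (glDrawArrays would do), but they match the glDrawElements calls in the original snippet.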
A couple of suggestions:
You can draw one collection of triangles that aren't connected to each other, so that they visually appear to be two objects (see the sketch below for how the query can still wrap just the second group).
You can also create two separate OpenGL contexts, one for each object you want to draw. When drawing an object, make its associated context the 'current' context and then issue your draw calls.
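To illustrate the first suggestion: both triangles can live in a single buffer and still be drawn with two separate calls, so the query wraps only the second one. A rough sketch, under the same shader assumptions as the snippet above and with queries[0] again created via glGenQueries:
GLfloat verts[] =
{
    -0.5f, 0.0f, 0.0f,   -0.5f, 0.5f, 0.0f,   0.0f, 0.0f, 0.0f,  // triangle 1
     0.0f, 0.0f, 0.0f,    0.5f, 0.5f, 0.0f,   0.5f, 0.0f, 0.0f   // triangle 2
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

glDrawArrays(GL_TRIANGLES, 0, 3);                // first object: vertices 0..2

glBeginQuery(GL_SAMPLES_PASSED, queries[0]);
glDrawArrays(GL_TRIANGLES, 3, 3);                // second object: vertices 3..5
glEndQuery(GL_SAMPLES_PASSED);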
My code currently looks like this:
glViewport (0, 0, this->w(), this->h());
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);
//glTranslated (m_fXmovement, 0.0, m_fZmovement - 5);
//glRotated (m_fYangleView, 1.0, 0.0, 0.0);
//glRotated (m_fXangleView, 0.0, 1.0, 0.0);
///// Model View /////
glMatrixMode(GL_MODELVIEW);
glTranslated (m_fXmovement, 0.0, m_fZmovement - 5 );
glRotated (m_fYangleView, 1.0, 0.0, 0.0);
glRotated (m_fXangleView, 0.0, 1.0, 0.0);
DrawWaveFrontObject (m_pDataObjectMedia);
glPushMatrix();
glTranslated (0.0, -3.0, 0.0);
DrawArea();
glPopMatrix();
DrawClickAnimation();
glLoadIdentity();
First I had the movement part in GL_PROJECTION and everything was running fine, until I started working with fog... it felt like the camera wasn't moving; it felt more like an additional camera pointing at that camera.
Then I accidentally copied the movement parts into GL_MODELVIEW, and the fog behaved the way I wanted it to. Everything was fine except that the click animation was no longer positioned relative to the area; the animation now moved with my first-person perspective. I don't really understand which kind of drawing belongs in which of these two matrix modes. Could anyone give me examples or explanations based on my code, or a hint at what I could improve in my style?
Quote from opengl.org forum:
The projection matrix is used to create your viewing volume. Imagine a scene in the real world. You don't really see everything around you, only what your eyes allow you to see. If you're a fish for example you see things a bit broader. So when we say that we set up the projection matrix we mean that we set up what we want to see from the scene that we create. I mean you can draw objects anywhere in your world. If they are not inside the view volume you won't see anything. When you create the view volume imagine that you create 6 clipping planes that define your field of view.
As for the modelview matrix, it is used to make various transformations to the models (objects) in your world. Like this you only have to define your object once and then translate it or rotate it or scale it.
You would use the projection matrix before drawing the objects in your scene to set the view volume. Then you draw your object and change the modelview matrix accordingly. Of course you can change your matrix midway of drawing your models if for example you want to draw a scene and then draw some text (which with some methods you can work easier in orthographic projection) then change back to modelview matrix.
As for the name modelview it has to do with the duality of modeling and viewing transformations. If you draw the camera 5 units back, or move the object 5 units forwards it is essentially the same.
First of all, I suggest that you try to abandon the fixed-function pipeline (glTranslate etc) since it's been deprecated for like 10 years now. Look here for a more modern tutorial if you're interested.
As for your problem, you can imagine the meaning of the two matrices like this: The projection matrix essentially captures properties intrinsic to the camera itself, like how its field of view is shaped.
On the other hand, the modelview matrix is composed of two parts, the model matrix and the view matrix. The model part transforms from object space (coordinates relative to an object itself) to world space. The view part then transforms from there into eye space, in which the camera sits at the origin and looks down the negative z axis. Together, the modelview matrix essentially states how objects are positioned relative to the camera.
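In fixed-function terms that might look roughly like the sketch below; the variable names (xMove, zMove, pitch, yaw) and DrawObject() are placeholders, and the glFrustum values are taken from the question:
// Projection: camera intrinsics (the viewing volume). Usually set once per resize/frame.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);

// Modelview: first the "view" part (where the camera is and where it looks)...
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslated(xMove, 0.0, zMove - 5.0);
glRotated(pitch, 1.0, 0.0, 0.0);
glRotated(yaw,   0.0, 1.0, 0.0);

// ...then the "model" part, pushed and popped per object, so every object
// (and effects like fog) is placed relative to the same camera.
glPushMatrix();
glTranslated(0.0, -3.0, 0.0);   // this object's own placement in the world
DrawObject();
glPopMatrix();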
For further information, this resource gives a detailed description of graphics transformations in the context of OpenGL.
[Jan, 2017] Edit: The pages from the first link seem to be inaccessible these days, so here is another link to the same content from their archive.
Let's say my display function draws polygons pixel by pixel, not using OpenGL functions but a drawpixel function.
I call
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
where global_ambient is 0.0, 0.0, 0.0, 1.0, and I have no material parameters defined, that is, glMaterial is never called. Would the global ambient lighting still work, in the sense that I would not be able to see the polygon? Or would I need to define material parameters?
Let's say my display function draws polygons pixel by pixel, not using OpenGL functions but a drawpixel function.
If that's true, then the lighting state is completely irrelevant. Fixed-function OpenGL lighting is per-vertex. You're not sending vertices; you're sending pixel data.
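To make the per-vertex point concrete, here is a rough compatibility-profile sketch; the material value is purely illustrative (if glMaterial is never called, OpenGL simply uses its default material):
GLfloat global_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat mat_ambient[]    = { 0.2f, 0.2f, 0.2f, 1.0f };   // illustrative material value

glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, mat_ambient);

// Lighting is evaluated here, per submitted vertex:
glBegin(GL_TRIANGLES);
glNormal3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();

// Pixel writes (glDrawPixels, or a custom drawpixel routine) bypass that stage entirely.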
For example, given two cubes with similar vertices, e.g.,
float pVerts[] =
{
0.0, 0.0, 0.0,
1.0, 0.0, 0.0,
...
};
glGenBuffers(1, &mVertexBuffer);
glBindBuffer(...);
glBufferData(...);
Can I just cache this set of vertices out for later usage? Or, in other words, if I wanted a second cube (with the exact same vertex data), do I need to generate another vertex buffer?
And with shaders, does the same apply? Can I use the same program for drawing these cubes?
You can use the same vertex buffer to draw as many objects as you want (shaders or not). If you want to draw a second object, just change the model matrix and draw it again.
Same for shaders, you can use the same shader to draw as many objects as you want. Just bind the shader and then fire off as many draw calls as you need.
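A rough sketch of that pattern; cubeProgram, cubeVao, uModel and the two model-matrix arrays are made-up names, and the VAO is assumed to have been set up once from mVertexBuffer:
glUseProgram(cubeProgram);                       // one shader program for both cubes
glBindVertexArray(cubeVao);                      // one set of vertex data for both cubes

GLint modelLoc = glGetUniformLocation(cubeProgram, "uModel");

// First cube
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, modelMatrixA);   // e.g. identity
glDrawArrays(GL_TRIANGLES, 0, 36);

// Second cube: same buffer, same shader, different model matrix
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, modelMatrixB);   // e.g. translated along X
glDrawArrays(GL_TRIANGLES, 0, 36);
(The count of 36 assumes a non-indexed cube of 12 triangles.)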
I'm writing a plugin for an application called Autodesk MotionBuilder, which has an OpenGL renderer, and I'm trying to render textured geometry into the scene. I have a window with a 3D View embedded in it, and every time my window is rendered, this is (in a nutshell) what happens:
I tell the renderer that I'm about to draw into a region with a given size
I tell the renderer to draw the MotionBuilder scene in that region
I draw some additional stuff into and/or on top of the scene
The challenge here is that I'm inheriting some arbitrary OpenGL state from MotionBuilder's renderer, which varies depending on what it's drawing and what's present in the scene. I've been dealing with this fine so far, but there's one thing I can't figure out. The way that OpenGL interprets my UV coordinates seems to change based on whatever MotionBuilder is doing behind my back.
Here's my rendering code. If there's no textured geometry in the scene, meaning MotionBuilder hasn't yet fiddled with any texture-related attributes, it works as expected.
// Tell MotionBuilder's renderer to draw the scene
RenderScene();
// Clear whatever arbitrary state MotionBuilder left for us
InitializeAttributes(); // includes glPushAttrib(GL_ALL_ATTRIB_BITS)
InitializePerspective(); // projects into the scene / loads matrices
// Enable texturing, bind to our texture, and draw a triangle into the scene
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexture);
glBegin(GL_TRIANGLES);
glColor4f(1.0, 1.0, 1.0, 0.5f);
glTexCoord2f(1.0, 0.0); glVertex3f(128.0, 0.0, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f( 0.0, 128.0, 0.0);
glTexCoord2f(0.0, 0.0); glVertex3f( 0.0, 0.0, 0.0);
glEnd();
// Clean up so we don't confound MotionBuilder's initial expectations
RestoreState(); // includes glPopAttrib()
Now, if I bring in some meshes with textures, something odd happens. My texture coordinates get scaled way up. Here's a before and after:
[image: before/after screenshots] (source: awforsythe.com)
As you can see from the close-up on the right, when MotionBuilder is asked to render a texture whose file it can't find, it instead loads this small question mark texture and tiles it across the geometry. My only hypothesis is that MotionBuilder is changing some global texture coordinate scalar so that, for example, glTexCoord2f(0.5, 1.0) will instead be interpreted as if it were (50.0, 100.0). Is there such a feature in OpenGL? Any idea what I need to modify in order to preserve my texture coordinates as I've entered them?
Since typing the above and after doing a bit of research, I have discovered that there's a GL_TEXTURE matrix that's used to this effect. Neat! And indeed, when I get the value of this matrix initially, it's the good ol' identity matrix:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
When I check it again after MotionBuilder fudges up my texture coordinates:
16 0 0 0
0 16 0 0
0 0 1 0
0 0 0 1
How telling! But here's a slight problem: if I try to explicitly set the texture matrix before doing my own drawing, regardless of what MotionBuilder is doing, it seems like my texture coordinates have no effect and it simply samples the lower-left corner of the texture (0.0, 0.0) for every vertex.
Here's the attempted fix, placed after RenderScene in the code posted above:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
I can verify that the value of GL_TEXTURE_MATRIX is now the identity matrix, but no matter what coordinates I specify in glTexCoord2f, it's always drawn as if the coordinates for each vertex were (0.0, 0.0):
[image: the triangle sampling only the lower-left corner of the texture] (source: awforsythe.com)
Any idea what else could be affecting how OpenGL interprets my texture coordinates?
Aha! These calls:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
...have to be made after GL_TEXTURE_2D is enabled.
...should be followed up by setting the matrix mode back to GL_MODELVIEW. It turns out, apparently, that some functions I was calling immediately after resetting the texture matrix (glViewport and/or gluPerspective?) affect the current matrix stack. So those calls were affecting the texture matrix, causing my texture coordinates to be transformed in unexpected ways.
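Put together, the order that reflects both points looks roughly like this (RenderScene, the Initialize*/Restore* helpers and mTexture are the ones from the code above):
RenderScene();                   // MotionBuilder may leave GL_TEXTURE scaled, e.g. by 16
InitializeAttributes();          // glPushAttrib(GL_ALL_ATTRIB_BITS)
InitializePerspective();         // viewport / projection setup

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexture);

// Reset the texture matrix after texturing is enabled...
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
// ...and switch back so later matrix calls don't touch GL_TEXTURE
glMatrixMode(GL_MODELVIEW);

glBegin(GL_TRIANGLES);
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(128.0f,   0.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(  0.0f, 128.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f); glVertex3f(  0.0f,   0.0f, 0.0f);
glEnd();

RestoreState();                  // glPopAttrib()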
I think I've got it now.
I have rendered a very rough model of a molecule that consists of 7 helices, and I would like to ask whether there is any way to allow the helices themselves to tilt (rotate) in certain ways so that they can interact with one another. For clarity, I have included an image of my program output (rendered with an orthographic projection, so it appears as the projection of a 3D helix onto a 2D plane).
I have included the code for rendering a single helix (all the others are the same).
Would it be useful to store the geometry of my objects in vertex arrays instead of rendering them each time separately for the 7 different colors? (Each helix consists of 36,000 vertices and I am concerned that the arrays might get large enough to cause serious performance issues?)
I understand that the matrix stack is the data structure for performing multiple consecutive individual transformations on particular objects, but I'm not sure how exactly to specify them so that an entire helix can tilt (glRotatef does not actually tilt the helices, for some reason).
/*HELIX RENDERING*/
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(0.0, 100.0, -5.0); //Move Position
glRotatef(90.0, 0.0, 0.0, 0.0);
glBegin(GL_LINE_STRIP);
for(theta = 0.0; theta <= 360.0; theta += 0.01) {
x = r*(cosf(theta));
y = r*(sinf(theta));
z = c*theta;
glVertex3f(x,y,z);
glColor3f(1.0, 1.0, 0.0);
}
glEnd();
glPopMatrix();
Would it be useful to store the geometry of my objects in vertex arrays instead of rendering them each time separately for the 7 different colors? (Each helix consists of 36,000 vertices and I am concerned that the arrays might get large enough to cause serious performance issues?)
Drawing geometry using vertex arrays always makes sense. And in your case, the overhead caused by those 36k * (5 floating point operations + 2 function calls) per frame will seriously affect your performance. Using vertex arrays will easily give you a 100× performance gain, simply because you're not recreating the data on each and every call.
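As a sketch of what that could look like for one helix, using legacy client-side vertex arrays to match the question's fixed-function code (r and c are the same constants as in the question):
#include <cmath>
#include <vector>

// Build the helix once, e.g. at startup, instead of recomputing it every frame
std::vector<GLfloat> helix;
for (float theta = 0.0f; theta <= 360.0f; theta += 0.01f)
{
    helix.push_back(r * cosf(theta));
    helix.push_back(r * sinf(theta));
    helix.push_back(c * theta);
}

// Every frame: point OpenGL at the prebuilt array and issue a single draw call
glColor3f(1.0f, 1.0f, 0.0f);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, helix.data());
glDrawArrays(GL_LINE_STRIP, 0, (GLsizei)(helix.size() / 3));
glDisableClientState(GL_VERTEX_ARRAY);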
You may also be interested in not using lines, since you can't shade those in any useful way. I'd render those helices by creating basic building blocks made from ellipses extruded along the helical path: one basic block for the intra-helix segment and two for the caps. The chirality is easily changed by mirroring along one axis. With modern OpenGL implementations you can use instancing on the intra-helix element to further increase performance.
If you want to flex the helix, I'd do this using skeletal skinning.