I came across an issue with occlusion queries. In my case, the query result always seems larger than it should be. For example, I render at 1024*1024 resolution, but the query result for an object in the scene is 2085029 (> 1024*1024 = 1048576).
The query method I'm using is the one from GPU Gems, Chapter 29:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glBeginQuery(GL_SAMPLES_PASSED, occlusionQuery[0]);
mesh->Render();
glEndQuery(GL_SAMPLES_PASSED);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glGetQueryObjectuiv(occlusionQuery[0], GL_QUERY_RESULT, &screenFragmentCount[0]);
Can anyone help?
The occlusion query does not tell you how many pixels passed the tests. It tells you how many fragments passed the tests (or more accurately, "samples").
If you draw one triangle and then draw another triangle in front of it, the occlusion query counts fragments from both, even though the first triangle ends up occluded by the second. The triangles are rasterized in order, so the later triangle that covers the earlier one has not been rasterized yet at the point where the query count is bumped. Self-occlusion like this only reduces the count if the triangles happen to be rendered from nearest to farthest.
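To make the order dependence concrete, here is a minimal CPU-side sketch (my own simulation, not GL code; the grid size and depths are arbitrary) of a GL_LESS depth test over a small pixel grid with two overlapping full-grid quads:

```c
#include <assert.h>

#define W 4
#define H 4

/* Simulate drawing a full-grid quad at a fixed depth with GL_LESS
   depth testing; returns the number of "samples" that passed, which
   is what GL_SAMPLES_PASSED accumulates. */
static int draw_quad(float depth, float *depth_buf)
{
    int passed = 0;
    for (int i = 0; i < W * H; i++) {
        if (depth < depth_buf[i]) {   /* GL_LESS */
            depth_buf[i] = depth;
            passed++;
        }
    }
    return passed;
}

/* Draw a near quad and a far quad in either order and return the
   total sample count the occlusion query would report. */
static int query_samples(float near_depth, float far_depth, int back_to_front)
{
    float depth_buf[W * H];
    for (int i = 0; i < W * H; i++) depth_buf[i] = 1.0f; /* cleared */
    int samples = 0;
    if (back_to_front) {
        samples += draw_quad(far_depth, depth_buf);  /* far quad passes... */
        samples += draw_quad(near_depth, depth_buf); /* ...then near covers it */
    } else {
        samples += draw_quad(near_depth, depth_buf); /* near quad passes */
        samples += draw_quad(far_depth, depth_buf);  /* far quad is rejected */
    }
    return samples;
}
```

Drawn back-to-front, both quads pass the depth test and the query reports twice the covered area, even though only the near quad is visible in the end; drawn front-to-back, the far quad is rejected and only the visible samples are counted.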
Related
I've been trying to implement something like the game Antichamber (more precisely, the trick shown below) for the past week:
Here's a video of what I'm hoping to achieve (even though it was done with the Unreal Engine 4; I'm not using that): https://www.youtube.com/watch?v=Of3JcoWrMZs
I looked up the best way to do this and found out about the stencil buffer. Between this article and this code (the "drawPortals()" function) I found online, I managed to almost implement it.
It works nicely with one portal into another room (not a crossable portal, meaning you can't walk through it and be teleported into the other room). In my example I'm drawing a portal into a simple square room with a sphere in it; behind the portal there is another sphere, which I used to check that the depth buffer was working correctly and drawing it behind the portal:
The problems arise when I add another portal near the first one. In that case I manage to display the other portal correctly (the lighting is off, but the sphere on the right is a different color to show that it's another sphere):
But if I turn the camera so that the first portal has to be drawn over the second one, then the depth of the first one becomes wrong, and the second portal gets drawn over the first one, like this:
while it should be something like this:
So, this is the problem. I'm probably doing something wrong with the depth buffer, but I can't find what.
My code for the rendering part is pretty much this:
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
// First portal
glPushMatrix();
// Disable writing to the color and depth buffers; disable depth testing
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
// Make sure that the stencil always fails
glStencilFunc(GL_NEVER, 1, 0xFF);
// On fail, put 1 on the buffer
glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP);
// Enable writing to the stencil buffer
glStencilMask(0xFF);
// Clean the buffer
glClear(GL_STENCIL_BUFFER_BIT);
// Finally draw the portal's frame, so that it will have only 1s in the stencil buffer; the frame is basically the square you can see in the pictures
portalFrameObj1.Draw();
/* Now I compute the position of the camera so that it will be positioned at the portal's room; the computation is correct, so I'm skipping it */
// I'm going to render the portal's room from the new perspective, so I'm going to need the depth and color buffers again
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glClear(GL_DEPTH_BUFFER_BIT);
// Disable writing to the stencil buffer and enable drawing only where the stencil values are 1s (so only on the portal frame previously rendered)
glStencilMask(0x00);
glStencilFunc(GL_EQUAL, 1, 0xFF);
// Draw the room from this perspective
portalRoomObj1.Draw();
glPopMatrix();
// Now the second portal; the procedure is the same, so I'm skipping the comments
glPushMatrix();
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glStencilFunc(GL_NEVER, 1, 0xFF);
glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP);
glStencilMask(0xFF);
glClear(GL_STENCIL_BUFFER_BIT);
portalFrameObj2.Draw();
/* New camera perspective computation */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glClear(GL_DEPTH_BUFFER_BIT);
glStencilMask(0x00);
glStencilFunc(GL_EQUAL, 1, 0xFF);
portalRoomObj2.Draw();
glPopMatrix();
// Finally, I have to draw the portals' frames once again, but this time into the depth buffer, so that they won't get drawn over; first off, disable the stencil test
glDisable(GL_STENCIL_TEST);
// Disable the color buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glClear(GL_DEPTH_BUFFER_BIT);
// Draw portals' frames
portalFrameObj1.Draw();
portalFrameObj2.Draw();
// Enable the color buffer again
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
/* Here I draw the rest of the scene */
UPDATE
I managed to find out what the problem is, but I'm still not able to solve it. It isn't related to the depth buffer actually, but to the stencil.
Basically, the way I'm drawing the first portal is this:
1) Fill the portal frame's bits in the stencil buffer with 1s; outside the portal there are only 0s
2) Draw the portal room where the stencil has 1s (so that it gets drawn onto the portal's frame)
And I repeat this for the second portal.
For the first portal, step 1 gives me something like this (pardon the crude drawings, I'm lazy):
Then after step 2:
Then I start with the second portal:
But now, between steps 1 and 2, I tell the stencil test to draw only where the bits are 1s; since the buffer has just been cleared, I've lost track of the first portal's 1s. So if part of the second portal's frame is behind the first frame, the second frame's 1s will still be drawn over with the second portal's room, regardless of the depth buffer. Kind of like in this image:
I don't know if I managed to explain it right...
It's been a while now. Since there probably aren't going to be answers, I'm writing up the workaround I'm using now.
I tried solving the problem using different stencil functions and not emptying the stencil buffer when rendering a new portal; the idea was to exploit the fact that, by looking at the stencil buffer, I knew where the previous portals were being rendered.
In the end, I couldn't manage to do it; in the examples I posted in my original question, one of the 2 portals always obscured parts of the other.
So, I decided to do something simpler: I check whether a portal is visible, and if so I draw it exactly like in the code I posted in the question (emptying the stencil buffer each time).
In pseudo code, I do this:
for(var i = 0; i < PORTALS_NUMBER; i++)
{
    // Get the normal of the portal's frame and its position
    var normal = mPortalFramesNormals[i];
    var framePos = mPortalFrames[i].GetPosition();
    // Dot product between the normal and the vector from the frame to the camera
    var dotProduct = normal * (currentCameraPosition - framePos);
    // If the dot product is 0 or positive, the portal is facing the camera
    if(dotProduct >= 0)
    {
        // Render the portal
        DrawPortal(mPortalFrames[i], mPortalRooms[i]);
    }
}
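A concrete version of that facing test might look like this in C (the `vec3` type and the function names are mine, for illustration; the portal's normal is assumed to point out of the visible face of the frame):

```c
#include <assert.h>

typedef struct { float x, y, z; } vec3;

/* Returns nonzero when the camera is on the side of the portal that
   the frame normal points toward, i.e. when the dot product of the
   normal and the frame-to-camera vector is non-negative. */
static int portal_is_visible(vec3 normal, vec3 frame_pos, vec3 camera_pos)
{
    vec3 to_cam = { camera_pos.x - frame_pos.x,
                    camera_pos.y - frame_pos.y,
                    camera_pos.z - frame_pos.z };
    float dot = normal.x * to_cam.x
              + normal.y * to_cam.y
              + normal.z * to_cam.z;
    return dot >= 0.0f;
}
```

This is just the back-face test from the pseudocode above: a portal whose frame faces away from the camera can never show its room, so it can be skipped entirely.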
glDisable(GL_STENCIL_TEST);
// Now I draw the portals' frames in the depth buffer, so they don't get overwritten by other objects in the scene
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glClear(GL_DEPTH_BUFFER_BIT);
for(var i = 0; i < PORTALS_NUMBER; i++)
{
    mPortalFrames[i].Draw();
}
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
In my case it works perfectly: I need to display 4 portals in total, positioned as a cube (so I know that at most 2 portals are directly visible at a time), and only one face of each frame displays the portal while the other doesn't.
I know that there is probably a more elegant way to do this by using only the stencil buffer, but right now this works for me.
If anyone knows a better way to do this, I'm still interested in knowing!
EDIT: I've found out about the separate versions of the stencil functions (i.e., glStencilFuncSeparate, glStencilOpSeparate, glStencilMaskSeparate) that let you treat front and back faces differently. That seems like what I would need to solve the problems I described, but I won't be able to try it anytime soon. I'm just pointing it out in case someone with the same problem wanders here.
I'm a few years late, but this question still shows up when searching for this problem, and I've had to fix it as well, so I figured I'll drop my solution for anyone else who comes by.
1. Don't clear the depth buffer (except between frames)
The main issue with your solution is that you're periodically clearing the entire depth buffer. And every time you do, you're getting rid of any information you might have to figure out which portals to draw and which are obscured. The only depth data you actually need to clobber is on the pixels where your stencils are set up; you can keep the rest.
Just before you draw the objects "through" the portal, set up your depth buffer like so:
// first, make sure you're writing to the depth buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_TRUE);
glStencilMask(0x00);
// you can have the stencil enabled for this step, if you did anything fancy with it earlier (like depth testing)
glStencilFunc(GL_EQUAL, 1, 0xFF);
// the depth test has to be enabled for depth writes to occur. But don't worry; it'll always pass.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
// set the depth range so we only draw on the far plane, leaving a "hole" for later
glDepthRange(1, 1);
// now draw the portal object again
portalFrameObj1.Draw();
// and reset what you changed so the rest of the code works
glDepthFunc(GL_LESS);
glDepthRange(0, 1);
glColorMask(GL_TRUE,GL_TRUE,GL_TRUE,GL_TRUE);
Now, when you draw the objects "through" the portal, they'll show up where they're needed, but the rest of the screen will still have the depth information it used to! Everybody wins!
Of course, don't forget to overwrite the depth information with a third portalFrameObj1.Draw() like you've been doing. That'll help with the next part:
2. Check if your portal is obscured before setting up the stencil
At the start of the code, when you set up your stencil, you disable depth testing. You don't need to!
glStencilOp has three arguments:
sfail applies when the stencil test fails.
dpfail applies when the stencil test succeeds, but the depth test fails.
dppass applies when both the stencil and depth test succeed (or the stencil succeeds and the depth test is disabled or has no data).
Leave depth testing on. Set glStencilFunc(GL_ALWAYS, 0, 0xFF); rather than GL_NEVER, so the stencil test succeeds, and use dppass to set your stencil instead of sfail: glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
Now, you could also play around with glStencilFuncSeparate to optimize things a little, but you don't need it to get portals to obscure each other.
Attempting to switch the drawing mode to GL_LINES, GL_LINE_STRIP or GL_LINE_LOOP when your cube's vertex data was constructed mainly for use with GL_TRIANGLES produces some interesting results, but none that give a good wireframe representation of the cube.
Is there a way to construct the cube's vertex and index data so that simply toggling the draw mode between GL_LINES/GL_LINE_STRIP/GL_LINE_LOOP and GL_TRIANGLES provides nice results? Or is the only way to get a good wireframe to re-create the vertices specifically for use with one of the line modes?
The most practical approach is most likely the simplest one: Use separate index arrays for line and triangle rendering. There is certainly no need to replicate the vertex attributes, but drawing entirely different primitive types with the same indices sounds highly problematic.
To implement this, you could use two different index (GL_ELEMENT_ARRAY_BUFFER) buffers. Or, more elegantly IMHO, use a single buffer and store both sets of indices in it. Say you need triIdxCount indices for triangle rendering and lineIdxCount for line rendering. You can then set up your index buffer with:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuf);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
(triIdxCount + lineIdxCount) * sizeof(GLushort), 0,
GL_STATIC_DRAW);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER,
0, triIdxCount * sizeof(GLushort), triIdxArray);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER,
triIdxCount * sizeof(GLushort), lineIdxCount * sizeof(GLushort),
lineIdxArray);
Then, when you're ready to draw, set up all your state, including the index buffer binding (ideally using a VAO for all of the state setup) and then render conditionally:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuf);
if (renderTri) {
    glDrawElements(GL_TRIANGLES, triIdxCount, GL_UNSIGNED_SHORT, 0);
} else {
    glDrawElements(GL_LINES, lineIdxCount, GL_UNSIGNED_SHORT,
        triIdxCount * sizeof(GLushort));
}
From a memory usage point of view, having two sets of indices is a moderate amount of overhead. The actual vertex attribute data is normally much bigger than the index data, and the key point here is that the attribute data is not replicated.
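As an illustration of what the two index sets might look like for a cube (the concrete index values are mine, assuming the common 8-corner layout where bit 0 of the index selects +x, bit 1 selects +y and bit 2 selects +z):

```c
#include <assert.h>

/* 12 triangles (2 per face) over 8 shared corners */
static const unsigned short triIdxArray[36] = {
    0,1,3, 0,3,2,  /* -z face */
    4,6,7, 4,7,5,  /* +z face */
    0,4,5, 0,5,1,  /* -y face */
    2,3,7, 2,7,6,  /* +y face */
    0,2,6, 0,6,4,  /* -x face */
    1,5,7, 1,7,3   /* +x face */
};

/* 12 edges over the same 8 corners: each edge joins two corners
   whose indices differ in exactly one bit (one axis) */
static const unsigned short lineIdxArray[24] = {
    0,1, 2,3, 4,5, 6,7,  /* edges along x */
    0,2, 1,3, 4,6, 5,7,  /* edges along y */
    0,4, 1,5, 2,6, 3,7   /* edges along z */
};
```

Both arrays index the same vertex buffer, so triIdxCount = 36 and lineIdxCount = 24 here, and the wireframe contains only the cube's true edges rather than the face diagonals that line-mode triangle rendering would produce.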
If you don't strictly need to render lines, but just want a wireframe style of rendering, there are other options. There is, for example, an elegant approach (I've never implemented it myself, but it looks clever) where you only draw pixels close to the boundary of each polygon and discard the interior pixels in the fragment shader, based on the distance to the polygon edge. This question (where I contributed an answer) elaborates on the approach: Wireframe shader - Issue with Barycentric coordinates when using shared vertices.
I am writing a program to plot the distribution of a stream of live noisy data. The plots look something like this:
The scene is lit with 3 lights - 2 diffuse and 1 ambient - to allow the modeling to be revealed once filtering is applied to the data.
Currently, vertical scaling and vertex colour assignment are done by my code before sending the vertices to the GPU using:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(c_vertex), &(vertex_store[0][0].x));
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3, GL_FLOAT, sizeof(c_vertex),&(vertex_store[0][0].r));
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(c_vertex),&(vertex_store[0][0].nx));
glDrawElements(GL_TRIANGLES, (max_bins-1)*(max_bins-1)*2*3, GL_UNSIGNED_INT, vertex_order);
The use of the older functions is so that I can let the fixed pipeline do the lighting calculations without me writing a shader [something I have not done to the depth needed to do lighting with 3 sources].
To speed up processing I would like to send unscaled data to the GPU and apply a matrix with an X and Z scale of 1 and a Y scale of the appropriate value to make the peaks reach +1 on the Y axis. After this I would then like the GPU to select the colour for each vertex, depending on its post-scaling Y value, from a look-up table, which I assume would be a texture map.
Now I know I can do the last paragraph IF I write my own shaders - but that necessitates writing code for lighting, which I want to avoid. Is there any way of doing this using the buffers in the drawing code above?
"After this I would then like the GPU to select the colour for the vertex depending on its post scaling Y value from a look-up table which I assume would be a texture map."
You really should write your own shaders for that. Writing a shader for 3 light sources isn't any more complicated than writing one for a single light and looping over the lights.
However, I think what you asked for could still be done with the fixed function pipeline. You can use a 1D texture for the colors, enable texturing and the automatic texture coordinate generation, the latter via the glTexGen() family of functions.
In your specific case, the best approach seems to be setting up a GL_OBJECT_LINEAR mapping for s (the first and only texture coordinate you need for a 1D texture):
glEnable(GL_TEXTURE_GEN_S);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
What the GL will now do is calculate s as a function of your input vertex coordinates (x,y,z,w) such that:
s=a*x + b*y + c*z + d*w
where a, b, c and d are coefficients you can freely choose. I'm assuming your original vertices just need to be scaled along the y direction by a scaling factor V, so you can set b=V and all the others to zero:
GLfloat coeffs[4] = {0.0f, V, 0.0f, 0.0f};
glTexGenfv(GL_S, GL_OBJECT_PLANE, coeffs);
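For reference, the arithmetic the GL performs for GL_OBJECT_LINEAR texgen reduces to the plane equation above; a small sketch (the helper function is hypothetical, written only to show the mapping) makes the effect of the coefficients visible:

```c
#include <assert.h>

/* What GL_OBJECT_LINEAR texgen computes per vertex:
   s = a*x + b*y + c*z + d*w, with (a,b,c,d) from GL_OBJECT_PLANE. */
static float object_linear_s(const float coeffs[4],
                             float x, float y, float z, float w)
{
    return coeffs[0] * x + coeffs[1] * y + coeffs[2] * z + coeffs[3] * w;
}
```

With coeffs = {0, V, 0, 0}, s depends only on the vertex's y coordinate (scaled by V), so each vertex's height directly indexes into the 1D color texture, regardless of its x and z position.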
Then, you just have to enable texture mapping and provide a 1D texture to get the desired lookup.
I'm playing with OpenGL and I'm trying to get rid of the blue-marked triangles. I use this code for it:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_CULL_FACE);
And yes, I use
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
in my main loop. I've read that the problem can be the projection matrix. I use these values:
ProjectionMatrix = glm::perspective(45.5f, 4.0f / 3.0f, 0.1f, 100.0f);
I tried changing the near and far values, but it's still the same. I also tried changing the parameter of glDepthFunc, but that didn't help either.
So, any ideas? Thanks a lot.
This is perfectly valid behavior because you are not using filled polygons. Face culling still behaves the way you would expect when you use glPolygonMode (...), but the depth test does not.
The depth test and writes only apply to fragments during rasterization, not primitives during clipping / primitive assembly. In a nutshell, this means that anywhere that is not filled will not be affected by the depth of the primitive (e.g. triangle). So the only place the depth test applies in this example are the very few points on screen where two lines overlap.
If you want to prevent the wireframe overlay from drawing lines for triangles that would not ordinarily be visible, you will need to draw twice:
Pass 1
Set Polygon Mode to FILL
Disable color writes: glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
Draw primitives
Pass 2
Set Polygon Mode to LINE
Enable color writes: glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
Draw primitives
This will work because the first pass fills the depth buffer using filled (solid) primitives but does not write to the color buffer (thus everything is still transparent). The second pass draws lines at the edges of each primitive and these lines will fail a depth test if the interior (unfilled region) of another triangle covers it.
NOTE: You should use a depth test that includes equality (e.g. GL_LEQUAL) for the behavior discussed above to function correctly, so do not use GL_LESS.
I've finally reached the point where I can add some color to my vertices. But now I want to improve my FPS rate. Here is the current situation: I have a large number of vertices (~200000), and each of them can be in one of ~150 classes. Each class is distinguished by its color. I'm currently drawing my vertices like this:
glEnableClientState(GL_VERTEX_ARRAY)
glBindBufferARB(GL_ARRAY_BUFFER_ARB, self.bufferVertices)
glVertexPointer(3, GL_FLOAT, 0, None)
glEnableClientState(GL_NORMAL_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, self.bufferNormals)
glNormalPointer(GL_FLOAT, 0, None)
glEnableClientState(GL_COLOR_ARRAY)
glBindBufferARB(GL_ARRAY_BUFFER_ARB, self.bufferColors)
glColorPointer(3, GL_FLOAT, 0, None)
glDrawArrays( GL_POINTS, 0, len(self.vertices) / 3 )
Everything works fine and the FPS is ~60. All my buffers are generated only at init time. But now the colors that represent each class will change rapidly, about once every millisecond. That means I would have to recreate my color buffer for every draw, and for 200000 vertices I have a feeling the FPS will drop close to 0. The fix I'm trying to implement is to keep a fixed color buffer, but instead of the actual colors, each vertex would hold a reference to the class it belongs to. That way, only the ~150 class colors would change. The problem is I don't know how this could be implemented in OpenGL. Is this doable? Any pointers on how to do this?
Instead of colors, the classes should set a 1D texture coordinate; then, instead of changing that huge dataset, you only have to replace small parts of the texture. To avoid texture sampling interpolation, set the texture to GL_NEAREST filtering, and sample the texture in the vertex shader, once per vertex (i.e. not per-fragment texture sampling as you usually would).
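A sketch of the addressing side of this idea (the names and helper functions are mine; the texel-center arithmetic assumes GL_NEAREST filtering on a 1D texture with one texel per class):

```c
#include <assert.h>

#define NUM_CLASSES 150

/* Per-vertex texture coordinate for a class: the center of texel i,
   so GL_NEAREST always samples exactly that class's color. This value
   is baked into the (fixed) per-vertex attribute buffer once. */
static float class_to_texcoord(int class_idx)
{
    return ((float)class_idx + 0.5f) / (float)NUM_CLASSES;
}

/* What GL_NEAREST sampling effectively does: map s back to a texel. */
static int texcoord_to_texel(float s)
{
    int i = (int)(s * NUM_CLASSES);
    if (i < 0) i = 0;
    if (i >= NUM_CLASSES) i = NUM_CLASSES - 1;
    return i;
}
```

The per-vertex buffer then never changes; when the class colors change, you re-upload only the 150-texel texture (e.g. via glTexSubImage1D) instead of a 200000-entry color buffer.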
I think you should seriously take a look at shaders: http://en.wikipedia.org/wiki/GLSL. You shouldn't need to recreate your buffer every time; shaders can use your existing buffer to perform all kinds of operations on color and geometry.