Face culling works with GL_FRONT, but not GL_BACK - opengl

I've enabled face culling with glEnable(GL_CULL_FACE) and I'm trying to cull the back faces, but whenever I call glCullFace(GL_BACK) nothing gets rendered.
If I use glCullFace(GL_FRONT) it works as expected (that is, it renders the inside of my cubes, but not the outside).
I've tried changing the winding order, but that doesn't seem to be the problem, since GL_FRONT works.
What could be the reason for this?
Everything is rendered to a framebuffer with a depth renderbuffer attached, if that matters. Disabling culling makes everything render as expected.
Edit
The winding used is counter-clockwise; for example, the nearest face (two triangles):
x, y, z
0, 0, 0
1, 0, 0
1, 1, 0
0, 0, 0
1, 1, 0
0, 1, 0
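For reference, the culling setup is essentially this (a sketch assuming the default GL_CCW front-face convention, not my exact code):
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW); // counter-clockwise front faces (the default)
glCullFace(GL_BACK); // nothing gets rendered
// glCullFace(GL_FRONT); // renders the inside of the cubes, as expected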
Here is an image of what it looks like with GL_FRONT:
(without the back of the cubes, so you can see the effect). Again, this is what I expected it to look like.
And what it looks like without culling:

I would like to share my experience since I had the same problem:
I was able to render something with glCullFace(GL_FRONT) and got only the clear color with glCullFace(GL_BACK). It turned out that OpenGL was working perfectly fine (of course) and the problem came from the shading technique I was using, deferred shading.
I was using a quad in normalized device coordinates to display the result of the lighting calculations, and this quad was wound in clockwise order! After swapping the winding of this quad, everything worked.
This applies to anyone who draws something on the screen using a quad: either disable culling before drawing it and re-enable it afterwards (not recommended, because of the extra API calls), or simply make sure the quad is defined in CCW order.
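For illustration, a full-screen quad in NDC that is CCW when viewed from the front could look like this (a minimal sketch, drawn as two triangles without an index buffer):
GLfloat quadVertices[] = {
    // first triangle, counter-clockwise
    -1.0f, -1.0f,
     1.0f, -1.0f,
     1.0f,  1.0f,
    // second triangle, counter-clockwise
    -1.0f, -1.0f,
     1.0f,  1.0f,
    -1.0f,  1.0f,
};
// drawn with e.g. glDrawArrays(GL_TRIANGLES, 0, 6) after setting up the position attribute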


Drawing the grid over the texture

Before diving into the details: I have added the opengl tag because JoGL is a Java OpenGL binding, and the question should be accessible to experts of both.
Basically, what I am trying to do is render a grid over a texture in JoGL using GLSL. My idea so far is to render the texture first and then draw the grid on top. So what I am doing is:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, textureId);
// skipped the code where I setup the model-view matrix and where I do fill the buffers
gl2.glVertexAttribPointer(positionAttrId, 3, GL2.GL_FLOAT, false, 0, vertexBuffer.rewind());
gl2.glVertexAttribPointer(textureAttrId, 2, GL2.GL_FLOAT, false, 0, textureBuffer.rewind());
gl2.glDrawElements(GL2.GL_TRIANGLES, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
And after that I draw the grid, using:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, 0);
gl2.glDrawElements(GL2.GL_LINE_STRIP, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
Without enabling the depth test, the result looks pretty awesome.
But when I start updating the coordinates of the vertices (namely the axis that corresponds to height), the rendering comes out wrong (some things that should be in front appear behind, which makes sense with the depth test disabled). So I have enabled the depth test:
gl.glEnable(GL2.GL_DEPTH_TEST);
gl.glDepthMask(true);
And the result of the rendering is the following:
You can clearly see that the lines of the grid are blurred, some of them are displayed thinner than others, etc. What I have tried in order to fix the problem is some line smoothing:
gl2.glHint(GL2.GL_LINE_SMOOTH_HINT, GL2.GL_NICEST);
gl2.glEnable(GL2.GL_LINE_SMOOTH);
The result is better, but I am still not satisfied.
QUESTION: So basically the question is how to improve the solution further, so that the lines are solid and displayed nicely when I start updating the vertex coordinates.
If required, I can provide the shader code (it is really simple: the vertex shader just computes the position from the projection matrix, the model-view matrix and the vertex coordinates, and the fragment shader fetches the color from the texture sampler).
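In case it helps, the state I am experimenting with around the line drawing is roughly the following (a sketch in plain GL calls; the JoGL equivalents are the same names on the gl2 object, and the glPolygonOffset part is only an assumption about pushing the filled triangles slightly behind the coplanar grid lines):
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
// line smoothing only has a visible effect when blending is enabled
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glEnable(GL_LINE_SMOOTH);
// assumption: offset the filled triangles so the coplanar grid lines win the depth test
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);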

Depth Buffer not being written to by Draw calls

I've had zero luck getting depth buffering to work. Everything renders, but only in the order I draw it. When I look at the depth stencil in the VS Graphics Debugger, it is NEVER written to. It should be, from everything I can think of, but it remains all red in the debugger (which just means it's still clear).
Both the rendertarget and the depthstencil are created for me by DXUT, and I've inspected them closely enough to be fairly certain they look good. I've walked through the creation code with MSDN open and checked every flag, but thousands of people have used that code before, so I suspect the DXUT end is fine.
My rendertarget is a Texture2D, R8G8B8A8_UNORM_SRGB, 1280x1024, Sample count 1, Quality 0, Mip levels 1, set to bind to render target, no flags.
My depthstencil is a Texture2D, D24_UNORM_S8_UINT, 1280x1024, Sample count 1, Quality 0, Mip levels 1, set to bind as depth stencil, no other flags.
At the beginning of my render, I clear the rendertargetview. Hard to mess that up. Next I clear the DepthStencil:
pd3dImmediateContext->ClearRenderTargetView(DXUTGetD3D11RenderTargetView(), Colors::Black);
pd3dImmediateContext->ClearDepthStencilView(DXUTGetD3D11DepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
My depth stencil state for ALL draws is:
DepthEnable TRUE
DepthFunc LESS_EQUAL
DepthWriteMask ALL
StencilEnable FALSE
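In code, that state is created and bound roughly like this (a sketch rather than my exact code, with illustrative variable names; it also shows the depth-stencil view being bound alongside the render target, since depth is only written if the DSV is bound there):
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable    = TRUE;
dsDesc.DepthFunc      = D3D11_COMPARISON_LESS_EQUAL;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.StencilEnable  = FALSE;
ID3D11DepthStencilState* pDepthState = nullptr; // illustrative name
pd3dDevice->CreateDepthStencilState(&dsDesc, &pDepthState);
pd3dImmediateContext->OMSetDepthStencilState(pDepthState, 0);
// depth writes only land somewhere if the DSV is bound together with the render target:
ID3D11RenderTargetView* pRTV = DXUTGetD3D11RenderTargetView();
pd3dImmediateContext->OMSetRenderTargets(1, &pRTV, DXUTGetD3D11DepthStencilView());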
The DepthStencilView shows up as
Texture2D, D24_UNORM_S8_UINT, no flags, MipSlice 0
in the VS Graphics debugger. I have blending set to solid.
It does not look like a mismatch (for example, in multisampling) between the rendertarget texture and the depthstencil texture; I've confirmed they match (unless those two DXGI formats are for some reason incompatible).
Everything looks right other than depth, so I'm going to presume my transformation matrices are all fine.
The only other hint I have at this point is that if I clear the depth buffer to 0.999999, everything disappears. With 1.0, everything draws (though with incorrect/absent depth).
The viewport is set to the window size, with 0.0 as min depth and 1.0 as max. That's the first thing I checked.
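In code, that amounts to roughly the following (a sketch, not a copy of my setup code):
D3D11_VIEWPORT vp = {};
vp.TopLeftX = 0.0f;
vp.TopLeftY = 0.0f;
vp.Width    = 1280.0f;
vp.Height   = 1024.0f;
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
pd3dImmediateContext->RSSetViewports(1, &vp);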
My rasterizer state is:
FillMode SOLID
CullMode BACK
FrontCounterClockwise FALSE
DepthBias 0
DepthBiasClamp 0.000f
SlopeScaledDepthBias 0.000f
DepthClipEnable TRUE
ScissorEnable FALSE
MultisampleEnable TRUE
AntialiasedLineEnable FALSE
ForcedSampleCount 0
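And that rasterizer state corresponds roughly to this setup (again a sketch with illustrative names; ForcedSampleCount is omitted because it only exists on the D3D11.1 D3D11_RASTERIZER_DESC1 variant):
D3D11_RASTERIZER_DESC rsDesc = {};
rsDesc.FillMode              = D3D11_FILL_SOLID;
rsDesc.CullMode              = D3D11_CULL_BACK;
rsDesc.FrontCounterClockwise = FALSE;
rsDesc.DepthBias             = 0;
rsDesc.DepthBiasClamp        = 0.0f;
rsDesc.SlopeScaledDepthBias  = 0.0f;
rsDesc.DepthClipEnable       = TRUE;
rsDesc.ScissorEnable         = FALSE;
rsDesc.MultisampleEnable     = TRUE;
rsDesc.AntialiasedLineEnable = FALSE;
ID3D11RasterizerState* pRasterState = nullptr; // illustrative name
pd3dDevice->CreateRasterizerState(&rsDesc, &pRasterState);
pd3dImmediateContext->RSSetState(pRasterState);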
If there's nothing blatantly wrong above, what's the next logical thing to check? Given that I'm using the DXUT framework, I'm not doing a lot of the creation plumbing on my own, and it has always worked before! And in the debugger, it all looks great.
The only thing I can't check is the output of the vertex shader, because the graphics debugger explodes and crashes.

Textured cube renders blank in DirectX

I am trying to apply textures to a cube in DirectX 9. So far I have managed to draw it with vertex colors and light, as well as with a material and light, but now, with the texture, I get a blank cube: in my application it renders white, and in PIX it renders black. I know the texture is loaded correctly, as I can see it in PIX and I check any error the loading function may return (through the HRESULT).
Edit: what I saw in PIX was the same image on a surface, but with a breakpoint I can see that the texture has an address and thus exists; whether it is correct or not I cannot tell. Should I be able to see it in PIX?
I also know that the UVs are right, since they appear correct in PIX when I select the vertices, and they are indeed in clockwise order (at least one face is, but that should be enough to draw something).
I have removed the light code so the code is:
pd3dDevice->BeginScene();
setMatrices();
pd3dDevice->SetTexture(0, tex);
//pd3dDevice->SetRenderState(D3DRS_WRAP0, D3DWRAPCOORD_0);
pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
pd3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
pd3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
//pd3dDevice->SetRenderState(D3DRS_LIGHTING, FALSE);
// select which vertex format we are using
pd3dDevice->SetFVF(CUSTOMFVF);
// select the vertex buffer to display
pd3dDevice->SetStreamSource(0, v_buffer, 0, sizeof(CUSTOMVERTEX));
pd3dDevice->SetIndices(i_buffer);
// draw the indexed cube: 8 vertices, 12 triangles
pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 8, 0, 12);
pd3dDevice->EndScene();
setMatrices() applies the transformations; it worked well before I added lights, so it should be fine. I have commented out the call to the light-handling function, so it shouldn't interfere with the current code.
I notice that if I remove pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1); I get a black, unshaded but transformed cube, which seems reasonable to me.
As for the commented-out lines of code, I have tried with and without them and they do not seem to affect the situation as far as I can see. I have tried a couple more render state settings but had no luck; after all, they were mostly educated guesses rather than something that was supposed to work.
The FVF has XYZ, normal vector and UV.
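For reference, that layout corresponds roughly to the following declaration (a sketch; the actual names in my struct may differ):
struct CUSTOMVERTEX
{
    FLOAT x, y, z;    // position (D3DFVF_XYZ)
    FLOAT nx, ny, nz; // normal (D3DFVF_NORMAL)
    FLOAT u, v;       // texture coordinates (D3DFVF_TEX1)
};
#define CUSTOMFVF (D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1)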
Again, PIX shows the cube as black, while in the application it is white. Perhaps there's a hint in that?
Thank you for any attempt to help out, much obliged.

OpenGL – Get distance to farthest face from camera instead of closest face

I'm using OpenGL (with Python bindings) to render depth maps of models saved as .obj files. The output is a numpy array with the depth to the object at each pixel.
The seemingly relevant parts of the code I'm using look like this:
glDepthFunc(GL_LESS) # The Type Of Depth Test To Do
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)
I can use this to successfully render depth images of the object.
However, instead of obtaining the depth of the face closest to the camera along each ray, I want to obtain the depth of the face furthest from the camera. I want faces to be rendered regardless of which direction they are facing, i.e. I don't want to backface-cull.
I tried to achieve this by modifying the above code like so:
glDisable(GL_CULL_FACE) # Turn off backface culling
glDepthFunc(GL_GREATER) # The Type Of Depth Test To Do
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)
However, when I do this I don't get any depth rendered at all. I suspect this is because I am using GL_DEPTH_COMPONENT to read out the depth; however, I am not sure what to change to fix this.
How can I get the depth of the faces furthest from the camera?
Switching the depth test to GL_GREATER will in principle do the trick; you just overlooked a tiny detail: you need to initialize the depth buffer differently. By default it is initialized to 1.0 when clearing the depth buffer, so that GL_LESS comparisons can update it to values smaller than 1.0.
Now you want it to work the other way around, so you must initialize it to 0.0. To do so, just add a glClearDepth(0.0f) before the glClear(... | GL_DEPTH_BUFFER_BIT).
Furthermore, you mention yourself that you don't want backface culling for this. But instead of disabling it, you can also switch the face culling around using glCullFace. You probably want GL_FRONT here, as GL_BACK is the default. Disabling it will work too, of course.
Also make sure you use glDepthMask(GL_TRUE) so that depth writes to the buffer are enabled.
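Putting it all together, the relevant calls look roughly like this (shown as plain GL calls; the PyOpenGL names are identical):
glDisable(GL_CULL_FACE); // render faces regardless of orientation
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE); // allow depth writes
glDepthFunc(GL_GREATER); // keep the farthest fragment instead of the nearest
glClearDepth(0.0); // so that GL_GREATER comparisons can ever pass
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the model, then read back with glReadPixels(..., GL_DEPTH_COMPONENT, GL_FLOAT, ...)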

"Culling" for single vertices - glDrawArrays(GL_POINTS)

I have to support some legacy code which draws point clouds using the following code:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (float*)cloudGlobal.data());
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, (float*)normals.data());
glDrawArrays(GL_POINTS, 0, (int)cloudGlobal.size());
glFinish();
This code renders all vertices regardless of the angle between the normal and the "line of sight". What I need is to draw only the vertices whose normals point towards the camera.
For faces this would be called back-face culling, but I don't know how to enable anything like that for mere vertices. Any suggestions?
You could try to use the lighting system (unless you already need it for shading). Set the ambient color alpha to zero, and then simply use the alpha test to discard the points with zero alpha. You will probably need quite a high alpha in the diffuse color in order to avoid half-transparent points, in case alpha blending is required to antialias the points (to render discs instead of squares).
This assumes that the vertices have normals (but since you are talking about "facing away", I assume they do).
EDIT:
As correctly pointed out by @derhass, this will not work.
If you have cube-map textures, perhaps you can copy the normal into a texture coordinate and look up the alpha from a cube map (possibly in combination with the texture matrix, to take the camera and point-cloud transformations into account).
Actually, if your normals are normalized, you can scale them with the texture matrix to [-0.49, +0.49] and then use a simple 1D (or 2D) bar texture (half white, half black, including alpha). Note that, counterintuitively, this requires the texture wrap mode to be left at the default GL_REPEAT (not clamp).
If your point clouds have the shape of closed objects, you can still get similar behaviour even without cube-map textures by drawing a dummy mesh with glColorMask(0, 0, 0, 0) (it will only write depth) that "covers" the points that are facing away. You can generate this mesh as a group of quads placed behind the points, opposite to their normals, so that each quad is only visible from the side opposite to the one its points are supposed to be visible from, thereby covering them.
Note that this only gives a visual improvement (it will look like the points are culled), not a performance improvement.
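A sketch of that depth-only "cover" pass, in the style of the legacy code above (dummyVertices and dummyVertexCount are hypothetical placeholders for the generated cover mesh):
glEnable(GL_DEPTH_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // write depth only
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (float*)dummyVertices.data()); // hypothetical cover mesh
glDrawArrays(GL_TRIANGLES, 0, (int)dummyVertexCount);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); // restore color writes
// then draw the point cloud as before; points facing away now fail the depth test behind the cover mesh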
Just out of curiosity - what's your application and why do you need to avoid shaders?