I have a few objects in the scene, and even though I specify that object A has y = 10 (the highest object), from the TOP camera I can see the bottom objects through object A. Here is an image from my scene.
Only today I found an interesting property: the draw order of models seems to matter (I may be wrong). Here is another image where I change the draw order of "ship1". Note that "ship1" is way below my scene: if I call ship1.draw(); first, the ship disappears (correct), but if I call ship1.draw(); last, it appears on top (incorrect).
Video: Opengl Depth Problem video
Q1) Does the draw order always matter?
Q2) How do I fix this? Should I change the draw order every time I change the camera position?
Edit: I also compared my perspective projection class with the glm library, just to be sure that the problem is not with my projection matrix. Everything is correct.
Edit1: I have my project on git: Arkanoid git repository (Windows; the project is ready to run on any computer with VS installed).
Edit2: I don't use normals or textures. Just vertices and indices.
Edit3: Is it a problem if every object in the scene uses (shares) vertices from the same file?
Edit4: I also changed my perspective projection values. I had the near plane at 0.0f; now I have near=20.0f, far=500.0f, angle=60º. But nothing changes: the view does, but the depth doesn't. =/
Edit5: Here are my vertex and fragment shaders.
Edit6: Contact me any time, I am here all day, so ask me anything. At the moment I am rewriting the whole project from zero. I have two cubes which render well, one in front of the other. I have already added my classes for: camera, projections, handler for shaders. Moving on to the class which creates and draws objects.
// Vertex shader
#version 330 core
in vec4 in_Position;
out vec4 color;
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
void main(void)
{
color = in_Position;
gl_Position = Projection * View * Model * in_Position;
}
// Fragment shader
#version 330 core
in vec4 color;
out vec4 out_Color;
void main(void)
{
out_Color = color;
}
Some code:
void setupOpenGL() {
std::cerr << "CONTEXT: OpenGL v" << glGetString(GL_VERSION) << std::endl;
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
glDepthRange(0.0, 1.0);
glClearDepth(1.0);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
}
void display()
{
++FrameCount;
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderScene();
glutSwapBuffers();
}
void renderScene()
{
wallNorth.draw(shader);
obstacle1.draw(shader);
wallEast.draw(shader);
wallWest.draw(shader);
ship1.draw(shader);
plane.draw(shader);
}
I have cloned the repository you linked to see if the issue was located somewhere else. In your most recent version the Object3D::draw function looks like this:
glBindVertexArray(this->vaoID);
glUseProgram(shader.getProgramID());
glUniformMatrix4fv(this->currentshader.getUniformID_Model(), 1, GL_TRUE, this->currentMatrix.getMatrix()); // PPmat is the identity matrix
glDrawElements(GL_TRIANGLES, 40, GL_UNSIGNED_INT, (GLvoid*)0);
glBindVertexArray(0);
glUseProgram(0);
glClear( GL_DEPTH_BUFFER_BIT); <<< clears the current depth buffer.
The last line clears the depth buffer after each object that is drawn, meaning that the next object drawn is not occluded properly. You should only clear the depth buffer once every frame.
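A minimal sketch of the fix, keeping the member names from your repository: drop the glClear call from Object3D::draw so the depth buffer is only cleared once per frame, in display():

void Object3D::draw(Shader& shader)
{
    glBindVertexArray(this->vaoID);
    glUseProgram(shader.getProgramID());
    glUniformMatrix4fv(this->currentshader.getUniformID_Model(), 1, GL_TRUE, this->currentMatrix.getMatrix());
    glDrawElements(GL_TRIANGLES, 40, GL_UNSIGNED_INT, (GLvoid*)0);
    glBindVertexArray(0);
    glUseProgram(0);
    // no glClear(GL_DEPTH_BUFFER_BIT) here; display() already clears it once per frame
}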
How come when I manually change the alpha value in the array being passed to the shader, the result is the same for both 0.0f and 1.0f?
I was expecting the object to be drawn with some level of transparency, depending on alpha value.
I'm not using any textures. I always see my red object against a black background.
Accessing the GLSL variable from Java:
float[] color = {1.0f, 0.0f, 0.0f, 1.0f};
int mColorHandle = gl2.glGetUniformLocation(shaderProgram, "vColor");
gl2.glUniform4fv(mColorHandle, 1, color, 0);
GLSL fragment shader:
#version 120
uniform vec4 vColor;
void main() {
gl_FragColor = vColor;
gl_FragColor.a = 0.0; // does not make object transparent
// gl_FragColor.a = 1.0; // does not make object transparent
}
Needed to enable blending:
gl2.glEnable(GL.GL_BLEND);
gl2.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
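With this blend function the written color is src.rgb * src.a + dst.rgb * (1 - src.a), so the alpha value only has a visible effect once blending is enabled; without blending, the incoming fragment simply replaces whatever is in the framebuffer, alpha included.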
I'm working on a deferred shading project and I've got a problem with blending all the lights into the final render.
Basically I'm just looping over each light and then rendering the fullscreen quad with my shader that does the lighting calculations, but the final result is just a pure white screen. If I disable blending, I can see the scene fine, but it is lit by only one light.
void Render()
{
FirstPass();
SecondPass();
}
void FirstPass()
{
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
renderTarget->BindFrameBuffer();
gbufferShader->Bind();
glViewport(0, 0, renderTarget->Width(), renderTarget->Height());
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < meshes.size(); ++i)
{
// set uniforms and render mesh
}
renderTarget->UnbindFrameBuffer();
}
EDIT: I'm not rendering light volumes/geometry, I'm just calculating final pixel colours based on the lights (point/spot/directional).
void SecondPass()
{
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
renderTarget->BindTextures();
pointLightShader->Bind();
glViewport(0, 0, renderTarget->Width(), renderTarget->Height());
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < lights.size(); ++i)
{
// set uniforms
// shader does lighting calculations
screenQuad->Render();
}
renderTarget->UnbindTextures();
}
I can't imagine there being anything special to do in the shader other than output a vec4 for the final frag colour each time?
This is the main part of the pointLight fragment shader:
out vec4 FragColour;
void main()
{
vec4 posData = texture(positionTexture, TexCoord);
vec4 normData = texture(normalTexture, TexCoord);
vec4 diffData = texture(diffuseTexture, TexCoord);
vec3 pos = vec3(posData);
vec3 norm = vec3(normData);
vec3 diff = vec3(diffData);
float a = posData.w;
float b = normData.w;
float c = diffData.w;
FragColour = vec4(shadePixel(pos, norm, diff), 1.0);
}
But yeah, basically if I use this blend the whole screen is just white.
Well I fixed it, and I feel like an idiot now :)
My OpenGL clear color was set to
glClearColor(1.0, 1.0, 1.0, 1.0);
which (obviously) is pure white.
I just changed it to a black background:
glClearColor(0.0, 0.0, 0.0, 1.0);
And now I see everything fine. I guess it was additively blending with the white background, which obviously would be white.
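In other words, with glBlendFunc(GL_ONE, GL_ONE) each light pass computes dst = dst + src, so the accumulation starts from the clear color; starting from (1,1,1), every channel is already at maximum before the first light is even added.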
I have a simple OpenGL program and am trying to draw an instanced array that is stored in a vertex shader. I'm using the following two shaders for rendering:
Vertex Shader:
#version 330 core
uniform mat4 MVP;
const int VertexCount = 4;
const vec2 Position[VertexCount] = vec2[](
vec2(-100.0f, -100.0f),
vec2( -100.0f, 100.0f),
vec2( 100.0f, -100.0f),
vec2(100.0f, 100.0f));
void main()
{
gl_Position = MVP * vec4(Position[gl_VertexID], 0.0, 1.0);
}
Fragment Shader:
#version 330 core
#define FRAG_COLOR 0
layout(location = FRAG_COLOR, index = 0) out vec4 Color;
void main()
{
Color = vec4(0, 1, 0, 1); //let it will be green.
}
After I've compiled and validated these shader I create a vertex array object and draw it like triangle strips:
glUseProgram(programHandle); //handle is checked and valid.
glBindVertexArray(vao);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, 1);
The drawing viewport is set to the window size with glViewport(0, 0, 800, 600). I pass a simple orthographic matrix to MVP with the following code:
glUniformMatrix4fv(handle, 1, GL_FALSE, (GLfloat*)&matrix); //handle is checked and valid
where the matrix was initialized:
Matrix::CreateOrthographicOffCenter(-200, 200, -200, 200, 1.0f, -1.0f, &matrix);
...
void Matrix::CreateOrthographicOffCenter(float left, float right, float bottom, float top, float zNearPlane, float zFarPlane, Matrix* matrix)
{
memset(matrix, 0, sizeof(Matrix));
matrix->M11 = 2.0f / (right - left);
matrix->M14 = (-right - left) / (right - left);
matrix->M22 = 2.0f / (top - bottom);
matrix->M24 = (-top - bottom) / (top - bottom);
matrix->M33 = 1.0f / (zFarPlane - zNearPlane);
matrix->M34 = (-zNearPlane) / (zFarPlane - zNearPlane);
matrix->M44 = 1.0f;
}
The problem is that I get no triangle strip on my screen. I tried to draw the vertices without the MVP matrix (gl_Position = vec4(Position[gl_VertexID], 0.0, 1.0)) but also got nothing. How can I detect where the problem is?
glBindVertexArray(vao);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, 1);
And what exactly is stored in that VAO? I'm guessing your answer will be "nothing."
If so, then you have run afoul of several problems. If this is a compatibility context (or GL 2.1 or before), then OpenGL does not allow you to render with a VAO that has nothing in it. That is, you can't render with all attributes disabled. You will get a GL_INVALID_OPERATION error.
However, if you are in a core context 3.2 or above, then you can render with a disabled VAO.
Of course, that's just what the OpenGL specification says. What NVIDIA's drivers say is that you can render with a disabled VAO in both core and compatibility. What ATI's drivers say is that you can't render with a disabled VAO in both core and compatibility.
In short, if you want your code to work, bind something. Enable an array and put a buffer object there. It doesn't matter what is in it, since your shader simply won't care. But if you want it to work on different implementations, bind something.
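A minimal sketch of such a dummy setup (the names are illustrative, and the buffer contents do not matter since the shader never reads the attribute):

GLuint dummyVAO, dummyVBO;
glGenVertexArrays(1, &dummyVAO);
glBindVertexArray(dummyVAO);
glGenBuffers(1, &dummyVBO);
glBindBuffer(GL_ARRAY_BUFFER, dummyVBO);
GLfloat zeros[4] = {0.0f, 0.0f, 0.0f, 0.0f}; // contents are irrelevant
glBufferData(GL_ARRAY_BUFFER, sizeof(zeros), zeros, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);
glBindVertexArray(0);

Bind dummyVAO before the glDrawArraysInstanced call, and the draw should then be valid on both vendors' drivers.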
I'm doing ray casting in the fragment shader. I can think of a couple ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.
I'm going to argue that the most efficient approach is drawing a single "full-screen" triangle. For a triangle to cover the full screen, it needs to be bigger than the actual viewport. In NDC (and also in clip space, if we set w=1), the viewport is always the [-1,1] square. For a triangle to just completely cover this area, two of its sides need to be twice as long as the viewport rectangle, so that the third side crosses the edge of the viewport; hence we can, for example, use the following coordinates (in counter-clockwise order): (-1,-1), (3,-1), (-1,3).
We also do not need to worry about the texcoords. To get the usual normalized [0,1] range across the visible viewport, we just need to make the corresponding texcoords for the vertices twice as big, and the barycentric interpolation will yield exactly the same results for any viewport pixel as when using a quad.
This approach can of course be combined with attribute-less rendering as suggested in demanze's answer:
out vec2 texcoords; // texcoords are in the normalized [0,1] range for the viewport-filling quad part of the triangle
void main() {
vec2 vertices[3]=vec2[3](vec2(-1,-1), vec2(3,-1), vec2(-1, 3));
gl_Position = vec4(vertices[gl_VertexID],0,1);
texcoords = 0.5 * gl_Position.xy + vec2(0.5);
}
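The corresponding draw call is attribute-less; a minimal sketch (emptyVAO is an illustrative name for a VAO with no attributes enabled, which core profiles still require to be bound):

glBindVertexArray(emptyVAO);
glDrawArrays(GL_TRIANGLES, 0, 3);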
Why will a single triangle be more efficient?
This is not about the one saved vertex shader invocation and the one less triangle to handle at the front end. The most significant effect of using a single triangle is that there are fewer fragment shader invocations.
Real GPUs always invoke the fragment shader for 2x2 pixel sized blocks ("quads") as soon as a single pixel of the primitive falls into such a block. This is necessary for calculating the window-space derivative functions (those are also implicitly needed for texture sampling, see this question).
If the primitive does not cover all 4 pixels in that block, the remaining fragment shader invocations will do no useful work (apart from providing the data for the derivative calculations) and will be so-called helper invocations (which can even be queried via the gl_HelperInvocation GLSL built-in variable). See also Fabian "ryg" Giesen's blog article for more details.
If you render a quad with two triangles, both will have one edge going diagonally across the viewport, and on both triangles, you will generate a lot of useless helper invocations at the diagonal edge. The effect will be worst for a perfectly square viewport (aspect ratio 1). If you draw a single triangle, there will be no such diagonal edge (it lies outside of the viewport and won't concern the rasterizer at all), so there will be no additional helper invocations.
Wait a minute, if the triangle extends across the viewport boundaries, won't it get clipped and actually put more work on the GPU?
If you read the textbook materials about graphics pipelines (or even the GL spec), you might get that impression. But real-world GPUs use some different approaches like guard-band clipping. I won't go into detail here (that would be a topic on its own; have a look at Fabian "ryg" Giesen's fine blog article for details), but the general idea is that the rasterizer will produce fragments only for pixels inside the viewport (or scissor rect) anyway, no matter whether the primitive lies completely inside it or not, so we can simply throw bigger triangles at it if both of the following are true:
a) the triangle only extends beyond the 2D top/bottom/left/right clipping planes (as opposed to the z-dimension near/far ones, which are trickier to handle, especially because vertices may also lie behind the camera)
b) the actual vertex coordinates (and all intermediate calculation results the rasterizer might be doing on them) are representable in the internal data formats the GPU's hardware rasterizer uses. The rasterizer will use fixed-point data types of implementation-specific width, while vertex coords are 32-bit single-precision floats. (That is basically what defines the size of the guard-band.)
Our triangle is only a factor of 3 bigger than the viewport, so we can be very sure that there is no need to clip it at all.
But is it worth it?
Well, the savings on fragment shader invocations are real (especially when you have a complex fragment shader), but the overall effect might be barely measurable in a real-world scenario. On the other hand, the approach is not more complicated than using a full-screen quad, and it uses less data, so even if it might not make a huge difference, it won't hurt, so why not use it?
Could this approach be used for all sorts of axis-aligned rectangles, not just fullscreen ones?
In theory, you can combine this with the scissor test to draw some arbitrary axis-aligned rectangle (and the scissor test will be very efficient, as it just limits which fragments are produced in the first place, it isn't a real "test" in HW which discards fragments). However, this requires you to change the scissor parameters for each rectangle you want to draw, which implies a lot of state changes and limits you to a single rectangle per draw call, so doing so won't be a good idea in most scenarios.
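For illustration, such a combination would look roughly like this (x, y, width and height are placeholder values; glScissor takes window coordinates with the origin at the lower left):

glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);   // fragments are only generated inside this rectangle
glDrawArrays(GL_TRIANGLES, 0, 3); // the same oversized triangle as above
glDisable(GL_SCISSOR_TEST);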
You can send two triangles creating a quad, with their vertex attributes set to -1/1 respectively.
You do not need to multiply them with any matrix in the vertex/fragment shader.
Here are some code samples, simple as it is :)
Vertex Shader:
const vec2 madd=vec2(0.5,0.5);
attribute vec2 vertexIn;
varying vec2 textureCoord;
void main() {
textureCoord = vertexIn.xy*madd+madd; // scale vertex attribute to [0-1] range
gl_Position = vec4(vertexIn.xy,0.0,1.0);
}
Fragment Shader:
varying vec2 textureCoord;
uniform sampler2D t; // the texture to sample; this declaration was missing
void main() {
vec4 color1 = texture2D(t,textureCoord);
gl_FragColor = color1;
}
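A minimal sketch of the matching geometry and draw call (names are illustrative); with a triangle strip, the two triangles need only four vertices:

GLfloat quad[] = { -1.0f, -1.0f,
                    1.0f, -1.0f,
                   -1.0f,  1.0f,
                    1.0f,  1.0f };
// upload quad[] into a VBO bound to the vertexIn attribute, then:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);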
No need to use a geometry shader, a VBO or any memory at all.
A vertex shader can generate the quad.
layout(location = 0) out vec2 uv;
void main()
{
float x = float(((uint(gl_VertexID) + 2u) / 3u)%2u);
float y = float(((uint(gl_VertexID) + 1u) / 3u)%2u);
gl_Position = vec4(-1.0f + x*2.0f, -1.0f+y*2.0f, 0.0f, 1.0f);
uv = vec2(x, y);
}
Bind an empty VAO. Send a draw call for 6 vertices.
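For example (assuming the shader above is compiled into fullscreenProgram and emptyVAO is a VAO with no attributes):

glUseProgram(fullscreenProgram);
glBindVertexArray(emptyVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);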
To output a fullscreen quad, a geometry shader can be used:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 1.0 );
EmitVertex();
gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 1.0 );
EmitVertex();
gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 0.0 );
EmitVertex();
gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 0.0 );
EmitVertex();
EndPrimitive();
}
The vertex shader is just empty:
#version 330 core
void main()
{
}
To use this shader you can issue a dummy draw command with an empty VBO:
glDrawArrays(GL_POINTS, 0, 1);
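Note that in a core profile a VAO must still be bound for this draw call to be valid, even if no attributes are enabled in it.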
This is similar to the answer by demanze, but I would argue it's easier to understand. Also, this draws with only 4 vertices by using TRIANGLE_STRIP.
#version 300 es
out vec2 textureCoords;
void main() {
const vec2 positions[4] = vec2[](
vec2(-1, -1),
vec2(+1, -1),
vec2(-1, +1),
vec2(+1, +1)
);
const vec2 coords[4] = vec2[](
vec2(0, 0),
vec2(1, 0),
vec2(0, 1),
vec2(1, 1)
);
textureCoords = coords[gl_VertexID];
gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
}
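The matching draw call would then be (again with an empty but bound VAO, as in the other answers):

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);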
The following comes from the draw function of the class that draws fbo textures to a screen aligned quad.
Gl.glUseProgram(shad);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, vbo);
Gl.glEnableVertexAttribArray(0);
Gl.glEnableVertexAttribArray(1);
Gl.glVertexAttribPointer(0, 3, Gl.GL_FLOAT, Gl.GL_FALSE, 0, voff);
Gl.glVertexAttribPointer(1, 2, Gl.GL_FLOAT, Gl.GL_FALSE, 0, coff);
Gl.glActiveTexture(Gl.GL_TEXTURE0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glUniform1i(tileLoc, 0);
Gl.glDrawArrays(Gl.GL_QUADS, 0, 4);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glUseProgram(0);
The actual quad itself and the coords come from:
private float[] v=new float[]{ -1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f
};
The binding and setup of the VBOs I leave to you.
The vertex shader:
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 coords;
void main() {
coords=coord.st;
gl_Position=vec4(pos, 1.0);
}
Because the position is raw, that is, not multiplied by any matrix, the (-1,-1) to (1,1) quad fits exactly into the viewport. Look for Alfonse's tutorial linked off any of his posts on opengl.org.