I have a few objects in the scene, and even if I specify that object A has y = 10 (the highest object), from the TOP camera I can see the bottom objects through object A. Here is an image from my scene.
Only today I noticed an interesting property: the draw order of the models seems to matter (I may be wrong). Here is another image where I change the draw order of "ship1". Note that "ship1" is way below my scene: if I call ship1.draw(); first, the ship disappears (correct), but if I call ship1.draw(); last, it appears on top (incorrect).
Video: Opengl Depth Problem video
Q1) Does the draw order always matter?
Q2) How do I fix this? Should I change the draw order every time I change the camera position?
Edit: I also compared my Perspective Projection class with the glm library, just to be sure the problem is not in my projection matrix. Everything is correct.
Edit1: I have my project on git: Arkanoid git repository (Windows, the project is ready to run on any computer with VS installed).
Edit2: I don't use normals or textures, just vertices and indices.
Edit3: Is it a problem if every object in the scene uses (shares) vertices from the same file?
Edit4: I also changed my Perspective Projection values. I had the near plane at 0.0f; now I have near = 20.0f, far = 500.0f, angle = 60º. But nothing changes: the view does, but the depth does not. =/
Edit5: Here are my Vertex and Fragment shaders.
Edit6: Contact me any time, I am here all day, so ask me anything. At the moment I am rewriting the whole project from scratch. I have two cubes which render well, one in front of the other. I have already added my classes for the camera, the projections, and the shader handler. Moving on to the class which creates and draws objects.
// Vertex shader
#version 330 core
in vec4 in_Position;
out vec4 color;
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
void main(void)
{
    color = in_Position;
    gl_Position = Projection * View * Model * in_Position;
}
// Fragment shader
#version 330 core
in vec4 color;
out vec4 out_Color;
void main(void)
{
    out_Color = color;
}
Some code:
void setupOpenGL() {
    std::cerr << "CONTEXT: OpenGL v" << glGetString(GL_VERSION) << std::endl;
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glDepthMask(GL_TRUE);
    glDepthRange(0.0, 1.0);
    glClearDepth(1.0);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glFrontFace(GL_CCW);
}
void display()
{
    ++FrameCount;
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene();
    glutSwapBuffers();
}
void renderScene()
{
    wallNorth.draw(shader);
    obstacle1.draw(shader);
    wallEast.draw(shader);
    wallWest.draw(shader);
    ship1.draw(shader);
    plane.draw(shader);
}
I have cloned the repository you linked to see if the issue was located somewhere else. In your most recent version, the Object3D::draw function looks like this:
glBindVertexArray(this->vaoID);
glUseProgram(shader.getProgramID());
glUniformMatrix4fv(this->currentshader.getUniformID_Model(), 1, GL_TRUE, this->currentMatrix.getMatrix()); // PPmat is the identity matrix
glDrawElements(GL_TRIANGLES, 40, GL_UNSIGNED_INT, (GLvoid*)0);
glBindVertexArray(0);
glUseProgram(0);
glClear(GL_DEPTH_BUFFER_BIT); // <<< clears the current depth buffer
The last line clears the depth buffer after each object is drawn, so the next object drawn is never occluded properly. You should clear the depth buffer only once per frame.
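For reference, here is a minimal sketch of the corrected function, with only the per-object depth clear removed (everything else is your original code); display() already clears the depth buffer once at the start of each frame, which is all you need:

glBindVertexArray(this->vaoID);
glUseProgram(shader.getProgramID());
glUniformMatrix4fv(this->currentshader.getUniformID_Model(), 1, GL_TRUE, this->currentMatrix.getMatrix());
glDrawElements(GL_TRIANGLES, 40, GL_UNSIGNED_INT, (GLvoid*)0);
glBindVertexArray(0);
glUseProgram(0);
// no glClear(GL_DEPTH_BUFFER_BIT) here; it belongs in display(), once per frame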
I'm trying to port an old SDL game I wrote some time ago to 3.3+ OpenGL, and I'm having a few issues getting a proper orthographic matrix and its corresponding view.
glm::mat4 proj = glm::ortho(0.0f, static_cast<float>(win_W), static_cast<float>(win_H), 0.0f, -5.0f, 5.0f);
If I simply use this projection and try to render, say, a quad inside the boundaries of win_W and win_H, it works. However, if instead I try to simulate a camera using:
glm::mat4 view = glm::lookAt(
    glm::vec3(0.0f, 0.0f, 1.0f), // cam pos
    glm::vec3(0.0f, 0.0f, 0.0f), // looking at
    glm::vec3(0.0f, 0.0f, 1.0f)  // floored
);
I get nothing, even when positioning the quad in the center. The same happens if instead I center the view:
glm::mat4 view = glm::lookAt(
    glm::vec3(static_cast<float>(win_W), static_cast<float>(win_H), 1.0f), // cam pos
    glm::vec3(static_cast<float>(win_W), static_cast<float>(win_H), 0.0f), // looking at
    glm::vec3(0.0f, 0.0f, 1.0f) // floored
);
Considering that SDL usually subtracts the camera values from the vertices in the game to simulate a view, is it simply better to replace my MVP operation in the shader like this:
#version 150
in vec2 position;
in vec3 color;
out vec3 Color;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec4 camera; // camera position to subtract, SDL-style
void main()
{
    Color = color;
    // faulty view
    //gl_Position = projection * view * model * vec4(position, 0.0, 1.0);
    // new SDL-style camera
    vec4 newPosition = projection * model * vec4(position, 0.0, 1.0);
    gl_Position = newPosition - camera;
}
Unfortunately, the source I'm learning from (the ArcSynthesis book) doesn't cover a view that involves a coordinate system other than the default NDC. And for some reason, even today most discussion about OpenGL is about deprecated versions.
So, in short, what's the best way of setting up the view for an orthographic projection matrix for simple 2D rendering in modern OpenGL?
Your lookAt call actually does not make sense at all, since your up vector (0,0,1) is collinear with the viewing direction (0,0,-1). This will not produce a usable matrix.
You probably want (0,1,0) as the up vector. However, even if you change that, you might be surprised by the result. In combination with the projection matrix you set up, the point you specified as the lookAt target will not appear in the center, but at the corner of the screen. Your projection does not map (0,0) to the center, which one typically assumes when using some LookAt function.
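If you actually wanted the lookAt target centered, the projection itself would have to map (0,0) to the middle of the screen, along the lines of this sketch (keeping your y-down convention):

// Sketch: an ortho projection that maps (0,0) to the screen center instead of a corner
float w = static_cast<float>(win_W), h = static_cast<float>(win_H);
glm::mat4 proj = glm::ortho(-0.5f * w, 0.5f * w, 0.5f * h, -0.5f * h, -5.0f, 5.0f);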
I agree with Colonel Thirty Two's comment: don't use LookAt in this scenario, unless you have a very good reason to.
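For a 2D game with an ortho projection like yours, a plain translation by the negated camera position is usually all the view matrix needs to be. A minimal sketch (camPos is a hypothetical camera position in pixels):

// Minimal 2D "camera": shift the world opposite to the camera position.
// glm::translate comes from <glm/gtc/matrix_transform.hpp>.
glm::vec2 camPos(100.0f, 50.0f); // hypothetical camera position in pixels
glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(-camPos, 0.0f));
// The usual chain in the shader then just works:
// gl_Position = projection * view * model * vec4(position, 0.0, 1.0);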
I currently have 5 models displayed on screen. The following is my vertex shader for translating the models individually, so that I can get them to move in different directions:
#version 330
layout (location = 0) Position;
uniform mat4 MVP;
uniform vec3 Translation;
uniform mat4 Rotate;
void main()
{
gl_Position = MVP * * Rotate * vec4(Position + Translation, 1.0); // Correct?
}
And to position/move my models individually within the render loop:
//MODEL ONE
glUniform3f(loc, 0.0f, 4.0f, 0.0f); // loc is "Translate"
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(rotationMatrix)); // loc is "Rotate"
_model1.render();
Also, I do have a glm::mat4 rotateMatrix() that returns a rotation matrix, but when I multiply it with the other matrices within the render loop, the whole scene (minus the camera) rotates to the set angle.
UPDATE
How would I be able to apply my rotation to the models independently of the world, on their own axis? The problem now is that the model rotates, but around 0,0,0 of the world and not its own position.
There are a couple of syntax errors in your vertex shader:
No type for the Position variable. Looks from the context like it should be a vec3.
Two * signs after MVP.
I assume those were just accidents while copying the code, and you actually have a vertex shader that compiles.
To apply the rotation described by the Rotate matrix before the translation from the Translation vector, you should be able to simply change the order in the vertex shader:
vec4 rotatedVec = Rotate * vec4(Position, 1.0);
gl_Position = MVP * vec4(rotatedVec.xyz + Translation, 1.0);
The whole thing would look simpler if you defined Rotate as a 3x3 matrix, which is sufficient for a rotation.
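A sketch of what that would look like:

// Sketch with Rotate declared as a 3x3 matrix:
// uniform mat3 Rotate;
gl_Position = MVP * vec4(Rotate * Position + Translation, 1.0);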
I found this code on the internet, http://rioki.org/2013/03/07/glsl-skybox.html, for a cubemap environment texture (actually rendering a skybox), but I do not understand why it works.
void main()
{
    mat4 r = gl_ModelViewMatrix;
    r[3][0] = 0.0;
    r[3][1] = 0.0;
    r[3][2] = 0.0;
    vec4 v = inverse(r) * inverse(gl_ProjectionMatrix) * gl_Vertex;
    gl_TexCoord[0] = v;
    gl_Position = gl_Vertex;
}
So gl_Vertex is in world coordinates but what do we get by multiplying that by inverse of projection matrix and then modelview matrix?
This is the code I use to draw my skybox:
void SkyBoxDraw(void)
{
    GLfloat SkyRad = 1.0f;

    glUseProgramObjectARB(glsl_program_skybox);
    glDepthMask(0);
    glDisable(GL_DEPTH_TEST);

    // Cull backs of polygons
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);
    glEnable(GL_TEXTURE_CUBE_MAP);

    glBegin(GL_QUADS);
    //////////////////////////////////////////////
    // Negative X
    glTexCoord3f(-1.0f, -1.0f, 1.0f);
    glVertex3f(-SkyRad, -SkyRad, SkyRad);
    glTexCoord3f(-1.0f, -1.0f, -1.0f);
    glVertex3f(-SkyRad, -SkyRad, -SkyRad);
    glTexCoord3f(-1.0f, 1.0f, -1.0f);
    glVertex3f(-SkyRad, SkyRad, -SkyRad);
    glTexCoord3f(-1.0f, 1.0f, 1.0f);
    glVertex3f(-SkyRad, SkyRad, SkyRad);
    ......
    ......
    glEnd();

    glDepthMask(1);
    glDisable(GL_CULL_FACE);
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_CUBE_MAP);
    glUseProgramObjectARB(0);
}
So gl_Vertex is in world coordinates ...
No no no, gl_Vertex is in object/model-space unless I see some code elsewhere (e.g. how your vertex position is calculated in the actual non-shader portion of your program) that indicates otherwise :) In OpenGL we go from object-space to eye/view/camera-space when we multiply by the combined Model*View matrix. As you can see, there are lots of names for the same coordinate spaces, but object-space is definitely not a synonym for world-space. Setting r[3] to < 0, 0, 0, 1 > basically re-positions the camera's origin without affecting the direction, which is useful when all you want to know is a direction for a cubemap lookup.
That is, in a nutshell, what you want when using cubemaps: just a simple direction vector. The fact that textureCube (...) takes a 3D vector instead of a 4D one is an immediate hint that it is looking for a direction instead of a position. Position vectors have a 4th component, directions do not. So, technically, if you wanted to port this shader to modern OpenGL you would probably use an out vec3 and swizzle .xyz off of v, since v.w is unnecessary.
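A rough sketch of such a port (the in/out names and the View/Projection uniforms are my assumptions, standing in for the removed built-ins):

#version 330 core

layout(location = 0) in vec4 in_Position; // fullscreen geometry, already in clip-space

uniform mat4 View;       // assumed uniform replacing gl_ModelViewMatrix
uniform mat4 Projection; // assumed uniform replacing gl_ProjectionMatrix

out vec3 texcoord; // a direction is enough, so vec3 instead of gl_TexCoord[0]

void main()
{
    mat4 r = View;
    r[3][0] = 0.0; // r[3] is the fourth column, i.e. the translation,
    r[3][1] = 0.0; // so zeroing it keeps only the camera's orientation
    r[3][2] = 0.0;
    vec4 v = inverse(r) * inverse(Projection) * in_Position;
    texcoord = v.xyz; // swizzle off the unnecessary w
    gl_Position = in_Position;
}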
... but what do we get by multiplying that by inverse of projection matrix and then modelview matrix?
You are basically undoing the projection when you multiply by the inverse of these matrices. The only way this shader makes sense is if the coordinates you are passing for your vertices are defined in clip-space. So instead of going from object-space through the GL pipeline and winding up in screen-space at the end, you want the reverse of that; only, since the viewport is not involved in your shader, we cannot be dealing with screen-space. A little bit more information on how your vertex positions are calculated should clear this up.
I'm doing ray casting in the fragment shader. I can think of a couple ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.
I'm going to argue that the most efficient approach will be drawing a single "full-screen" triangle. For a triangle to cover the full screen, it needs to be bigger than the actual viewport. In NDC (and also clip space, if we set w=1), the viewport will always be the [-1,1] square. For a triangle to just cover this area completely, two of its sides need to be twice as long as the viewport rectangle, so that the third side crosses the edge of the viewport; hence we can, for example, use the following coordinates (in counter-clockwise order): (-1,-1), (3,-1), (-1,3).
We also do not need to worry about the texcoords. To get the usual normalized [0,1] range across the visible viewport, we just need to make the corresponding texcoords for the vertices twice as big, and the barycentric interpolation will yield exactly the same results for any viewport pixel as when using a quad.
This approach can of course be combined with attribute-less rendering as suggested in demanze's answer:
out vec2 texcoords; // texcoords are in the normalized [0,1] range for the viewport-filling quad part of the triangle
void main() {
    vec2 vertices[3] = vec2[3](vec2(-1,-1), vec2(3,-1), vec2(-1,3));
    gl_Position = vec4(vertices[gl_VertexID], 0, 1);
    texcoords = 0.5 * gl_Position.xy + vec2(0.5);
}
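On the application side this is an attribute-less draw: bind an empty VAO (one is required in a core profile even if it has no attributes) and issue a three-vertex call. A minimal sketch:

GLuint emptyVAO = 0;
glGenVertexArrays(1, &emptyVAO);
glBindVertexArray(emptyVAO);      // no attributes needed, gl_VertexID does the work
glDrawArrays(GL_TRIANGLES, 0, 3); // one triangle covering the whole viewport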
Why will a single triangle be more efficient?
This is not about the one saved vertex shader invocation and the one fewer triangle to handle at the front end. The most significant effect of using a single triangle is that there are fewer fragment shader invocations.
Real GPUs always invoke the fragment shader for 2x2-pixel blocks ("quads") as soon as a single pixel of the primitive falls into such a block. This is necessary for calculating the window-space derivative functions (which are also implicitly needed for texture sampling; see this question).
If the primitive does not cover all 4 pixels in that block, the remaining fragment shader invocations will do no useful work (apart from providing the data for the derivative calculations) and will be so-called helper invocations (which can even be queried via the gl_HelperInvocation GLSL built-in variable). See also Fabian "ryg" Giesen's blog article for more details.
If you render a quad as two triangles, both will have one edge going diagonally across the viewport, and on both triangles you will generate a lot of useless helper invocations along that diagonal edge. The effect will be worst for a perfectly square viewport (aspect ratio 1). If you draw a single triangle, there will be no such diagonal edge (it lies outside of the viewport and won't concern the rasterizer at all), so there will be no additional helper invocations.
Wait a minute, if the triangle extends across the viewport boundaries, won't it get clipped and actually put more work on the GPU?
If you read the textbook materials about graphics pipelines (or even the GL spec), you might get that impression. But real-world GPUs use some different approaches, like guard-band clipping. I won't go into detail here (that would be a topic of its own; have a look at Fabian "ryg" Giesen's fine blog article for details), but the general idea is that the rasterizer will produce fragments only for pixels inside the viewport (or scissor rect) anyway, no matter whether the primitive lies completely inside it or not, so we can simply throw bigger triangles at it if both of the following are true:
a) the triangle only extends beyond the 2D top/bottom/left/right clipping planes (as opposed to the near/far ones in the z dimension, which are trickier to handle, especially because vertices may also lie behind the camera)
b) the actual vertex coordinates (and all intermediate calculation results the rasterizer might be doing on them) are representable in the internal data formats the GPU's hardware rasterizer uses. The rasterizer will use fixed-point data types of implementation-specific width, while vertex coords are 32-bit single-precision floats. (That is basically what defines the size of the guard band.)
Our triangle is only a factor of 3 bigger than the viewport, so we can be very sure that there is no need to clip it at all.
But is it worth it?
Well, the savings on fragment shader invocations are real (especially when you have a complex fragment shader), but the overall effect might be barely measurable in a real-world scenario. On the other hand, the approach is not more complicated than using a full-screen quad, and it uses less data, so even if it might not make a huge difference, it won't hurt, so why not use it?
Could this approach be used for all sorts of axis-aligned rectangles, not just fullscreen ones?
In theory, you can combine this with the scissor test to draw some arbitrary axis-aligned rectangle (and the scissor test will be very efficient, as it just limits which fragments are produced in the first place; it isn't a real "test" in HW which discards fragments). However, this requires you to change the scissor parameters for each rectangle you want to draw, which implies a lot of state changes and limits you to a single rectangle per draw call, so doing so won't be a good idea in most scenarios; a sketch of the idea follows.
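For illustration, such a scissored draw would look roughly like this (x, y, w, h being the desired rectangle in window coordinates):

glEnable(GL_SCISSOR_TEST);
glScissor(x, y, w, h);            // only fragments inside this rectangle are produced
glDrawArrays(GL_TRIANGLES, 0, 3); // the oversized triangle from above
glDisable(GL_SCISSOR_TEST);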
You can send two triangles creating a quad, with their vertex attributes set to -1/1 respectively.
You do not need to multiply them with any matrix in the vertex/fragment shader.
Here are some code samples, simple as it is :)
Vertex Shader:
const vec2 madd=vec2(0.5,0.5);
attribute vec2 vertexIn;
varying vec2 textureCoord;
void main() {
    textureCoord = vertexIn.xy * madd + madd; // scale vertex attribute to [0-1] range
    gl_Position = vec4(vertexIn.xy, 0.0, 1.0);
}
Fragment Shader:
uniform sampler2D t; // sampler for the texture being drawn
varying vec2 textureCoord;
void main() {
    vec4 color1 = texture2D(t, textureCoord);
    gl_FragColor = color1;
}
No need to use a geometry shader, a VBO or any memory at all.
A vertex shader can generate the quad.
layout(location = 0) out vec2 uv;
void main()
{
    float x = float(((uint(gl_VertexID) + 2u) / 3u) % 2u);
    float y = float(((uint(gl_VertexID) + 1u) / 3u) % 2u);
    gl_Position = vec4(-1.0f + x * 2.0f, -1.0f + y * 2.0f, 0.0f, 1.0f);
    uv = vec2(x, y);
}
Bind an empty VAO. Send a draw call for 6 vertices.
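In code, that amounts to something like this sketch (emptyVAO being any VAO with no attributes enabled):

glBindVertexArray(emptyVAO);      // empty VAO, gl_VertexID generates the positions
glDrawArrays(GL_TRIANGLES, 0, 6); // two triangles forming the quad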
To output a fullscreen quad, a geometry shader can be used:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
    gl_Position = vec4( 1.0,  1.0, 0.5, 1.0);
    texcoord = vec2(1.0, 1.0);
    EmitVertex();

    gl_Position = vec4(-1.0,  1.0, 0.5, 1.0);
    texcoord = vec2(0.0, 1.0);
    EmitVertex();

    gl_Position = vec4( 1.0, -1.0, 0.5, 1.0);
    texcoord = vec2(1.0, 0.0);
    EmitVertex();

    gl_Position = vec4(-1.0, -1.0, 0.5, 1.0);
    texcoord = vec2(0.0, 0.0);
    EmitVertex();

    EndPrimitive();
}
The vertex shader is just empty:
#version 330 core
void main()
{
}
To use this shader, you can issue a dummy draw command with an empty VBO:
glDrawArrays(GL_POINTS, 0, 1);
This is similar to the answer by demanze, but I would argue it's easier to understand. Also, this is drawn with only 4 vertices by using TRIANGLE_STRIP.
#version 300 es
out vec2 textureCoords;
void main() {
    const vec2 positions[4] = vec2[](
        vec2(-1, -1),
        vec2(+1, -1),
        vec2(-1, +1),
        vec2(+1, +1)
    );
    const vec2 coords[4] = vec2[](
        vec2(0, 0),
        vec2(1, 0),
        vec2(0, 1),
        vec2(1, 1)
    );

    textureCoords = coords[gl_VertexID];
    gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
}
The following comes from the draw function of the class that draws FBO textures to a screen-aligned quad.
Gl.glUseProgram(shad);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, vbo);
Gl.glEnableVertexAttribArray(0);
Gl.glEnableVertexAttribArray(1);
Gl.glVertexAttribPointer(0, 3, Gl.GL_FLOAT, Gl.GL_FALSE, 0, voff);
Gl.glVertexAttribPointer(1, 2, Gl.GL_FLOAT, Gl.GL_FALSE, 0, coff);
Gl.glActiveTexture(Gl.GL_TEXTURE0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glUniform1i(tileLoc, 0);
Gl.glDrawArrays(Gl.GL_QUADS, 0, 4);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glUseProgram(0);
The actual quad itself and the coords come from:
private float[] v = new float[]{
    // vertex positions (x, y, z)
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f,
    // texture coordinates (s, t)
    0.0f, 0.0f,
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f
};
The binding and setup of the VBOs I leave to you.
The vertex shader:
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 coords;
void main() {
    coords = coord.st;
    gl_Position = vec4(pos, 1.0);
}
Because the position is raw, that is, not multiplied by any matrix, the -1,-1 to 1,1 extent of the quad fits the viewport exactly. Look for Alfonse's tutorial linked off any of his posts on opengl.org.