I'm using OpenGL with GLFW and GLEW. I'm rendering everything with shaders, but it seems like the depth buffer doesn't work.
The shaders I'm using for 3D rendering are:
Vertex Shader
#version 410
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec2 vt;
uniform mat4 view, proj, model;
out vec2 texture_coordinates;
void main() {
texture_coordinates = vt;
gl_Position = proj * view * model * vec4(vertex_position, 1.0);
}
Fragment Shader
#version 410
in vec2 texture_coordinates;
uniform sampler2D basic_texture;
out vec4 frag_colour;
void main() {
vec4 texel = texture(basic_texture, vec2(texture_coordinates.x, 1.0 - texture_coordinates.y));
frag_colour = texel;
}
I'm also enabling the depth buffer and face culling:
glEnable(GL_DEPTH_BUFFER);
glDepthFunc(GL_NEVER);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
This is how it looks:
The cube is rendered first, because it is the first group of the mesh, and then the monkey is always rendered in front; if I change the rendering order, the cube ends up in front instead.
Another example: you can see the ear of the monkey being rendered in front.
You're not enabling depth testing. Change glEnable(GL_DEPTH_BUFFER); to glEnable(GL_DEPTH_TEST);. This error could have been detected using glGetError(), since GL_DEPTH_BUFFER is not a valid argument to glEnable and the call raises GL_INVALID_ENUM.
Like SurvivalMachine said, change GL_DEPTH_BUFFER to GL_DEPTH_TEST. Also make sure that in your main loop you call glClear(GL_DEPTH_BUFFER_BIT) before any drawing commands.
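Putting both fixes together, a minimal sketch of a working depth setup (GL_LESS is the conventional depth comparison and is assumed here, since the question's GL_NEVER would reject every fragment once the test is actually enabled):

glEnable(GL_DEPTH_TEST);  // GL_DEPTH_BUFFER is not a valid flag for glEnable
glDepthFunc(GL_LESS);     // keep the fragment nearest to the camera
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);

// every frame, before any draw calls:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);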
Following the tutorial from learnopengl.com about rendering half-transparent window glass using blending, I tried to apply that principle to my simple scene (which can be navigated with the mouse), containing:
Cube: 6 faces, each having 2 triangles, constructed using two attributes (position and color) defined in its associated vertex shader and passed to its fragment shader.
Grass: 2D Surface (two triangles) to which a png texture was applied using a sampler2D uniform (the background of the png image is transparent).
Window: A half-transparent 2D surface based on the same shaders (vertex and fragment) as the grass above. Both textures were downloaded from learnopengl.com.
The issue I'm facing is that when it comes to the Grass, I can see it through the Window but not the Cube!
My code is structured as follows (I left the rendering of the window to the very last on purpose):
// enable depth test & blending
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
while (true):
    glClearColor(background.r, background.g, background.b, background.a);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    cube.draw();
    grass.draw();
    window.draw();
Edit: I'll share below the vertex and fragment shaders used to draw the two textured surfaces (grass and window):
#version 130
in vec2 position;
in vec2 texture_coord;
// opengl transformation matrices
uniform mat4 model; // object coord -> world coord
uniform mat4 view; // world coord -> camera coord
uniform mat4 projection; // camera coord -> ndc coord
out vec2 texture_coord_vert;
void main() {
gl_Position = projection * view * model * vec4(position, 0.0, 1.0);
texture_coord_vert = texture_coord;
}
#version 130
in vec2 texture_coord_vert;
uniform sampler2D texture2d;
out vec4 color_out;
void main() {
vec4 color = texture(texture2d, texture_coord_vert);
// manage transparency
if (color.a == 0.0)
discard;
color_out = color;
}
And the ones used to render the colored cube:
#version 130
in vec3 position;
in vec3 color;
// opengl transformation matrices
uniform mat4 model; // object coord -> world coord
uniform mat4 view; // world coord -> camera coord
uniform mat4 projection; // camera coord -> ndc coord
out vec3 color_vert;
void main() {
gl_Position = projection * view * model * vec4(position, 1.0);
color_vert = color;
}
#version 130
in vec3 color_vert;
out vec4 color_out;
void main() {
color_out = vec4(color_vert, 1.0);
}
P.S: My shader programs use GLSL v1.30, because my internal GPU didn't seem to support later versions.
Regarding the code that does the actual drawing, I basically have one instance of a Renderer class per type of geometry (one shared by both textured surfaces, and one for the cube). This class manages the creation/binding/deletion of VAOs and the binding/deletion of VBOs (the VBOs are created outside the class so I can share vertices between similar shapes). Its constructor takes the shader program and the vertex attributes as arguments. The relevant piece of code is below:
Renderer::Renderer(Program program, vector attributes) {
    vao.bind();
    vbo.bind();
    define_attributes(attributes);
    vao.unbind();
    vbo.unbind();
}
void Renderer::draw(Uniforms uniforms) {
    vao.bind();
    program.use();
    set_uniforms(uniforms);
    glDrawArrays(GL_TRIANGLES, 0, n_vertexes);
    vao.unbind();
    program.unuse();
}
Your blend function depends on the target's alpha channel (GL_ONE_MINUS_DST_ALPHA):
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
dest = src * src_alpha + dest * (1-dest_alpha)
Since your cube's fragment shader writes an alpha of 1.0, the dest * (1 - dest_alpha) term is zero wherever the cube was drawn, so the color of the cube is not mixed with the color of the window.
The traditional alpha blending function depends only on the source alpha channel:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
dest = src * src_alpha + dest * (1-src_alpha)
See also glBlendFunc and Blending
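Applied to the loop from the question, a minimal sketch of the conventional setup (assuming the same draw order as above: opaque cube first, alpha-tested grass, then the blended window last so that earlier geometry is already in the depth buffer):

glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
// classic "over" compositing: weight the source by its own alpha only
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

cube.draw();    // opaque
grass.draw();   // alpha-tested via discard, effectively opaque
window.draw();  // half-transparent, drawn last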
UPDATE: So it turns out this was due to a bug on the C side of things, causing some of the matrices to become malformed. The shaders are all fine. So if adding uniforms causes weird things to happen, my advice would be to use a debugger to check the values of ALL uniforms and make sure they are all being set correctly.
So I am trying to render depth to a cube map to use as a shadow map, but when I add and use a uniform in the fragment shader everything becomes white as if the shader isn't being used. No warnings or errors are generated when compiling/linking the shader.
The shader program I am using to render the depth map (setting the depth simply to the fragment z position as a test) is as follows:
//vertex shader
#version 430
layout(location = 0) in vec4 vertexPositionModel;
uniform mat4 modelToWorldMatrix;
void main() {
gl_Position = modelToWorldMatrix * vertexPositionModel;
}
//geometry shader
#version 430
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;
out vec4 fragPositionWorld;
uniform mat4 projectionMatrices[6];
void main() {
    for (int face = 0; face < 6; face++) {
        gl_Layer = face;
        for (int i = 0; i < 3; i++) {
            fragPositionWorld = gl_in[i].gl_Position;
            gl_Position = projectionMatrices[face] * fragPositionWorld;
            EmitVertex();
        }
        EndPrimitive();
    }
}
//fragment shader
#version 430
in vec4 fragPositionWorld;
void main() {
gl_FragDepth = abs(fragPositionWorld.z);
}
The main shader samples from the cubemap and simply renders the depth as greyscale colour:
vec3 lightDirection = fragPositionWorld - pointLight.position;
float closestDepth = texture(shadowMap, lightDirection).r;
finalColour = vec4(vec3(closestDepth), 1.0);
The scene is a small cube in a larger cubic room, and it renders as expected: dark near z = 0, with the cube projected back onto the walls (the depth map is rendered from the centre of the room):
Good: (screenshot)
I can move the small cube around and the projection projects correctly onto all the sides of the cubemap. All good so far.
The problem is when I add a uniform to the fragment shader, i.e.:
#version 430
in vec4 fragPositionWorld;
uniform vec3 lightPos;
void main() {
gl_FragDepth = min(lightPos.y, 0.5);
}
Everything renders as white, the same as if the shader had failed to compile:
Bad: (screenshot)
gDEBugger reports that the uniform is set correctly (0, 4, 0), but regardless of what lightPos is, gl_FragDepth should be set to a value no greater than 0.5 and appear as a shade of grey (which is what happens if I set gl_FragDepth = 0.5 directly). So I can only conclude that the fragment shader is not being used for some reason and the default one is used instead. Unfortunately, I have no idea why.
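In the spirit of the update above, here is a hedged C sketch of how every active uniform of a program can be read back and compared with what the application believes it uploaded (dump_uniforms is an illustrative helper; glGetUniformfv is only valid for float-typed uniforms such as the vec3 and mat4 array used here):

#include <stdio.h>

void dump_uniforms(GLuint program) {
    GLint count = 0;
    glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
    for (GLint i = 0; i < count; ++i) {
        char name[256];
        GLsizei length; GLint size; GLenum type;
        glGetActiveUniform(program, (GLuint)i, sizeof name, &length, &size, &type, name);
        GLint loc = glGetUniformLocation(program, name);
        GLfloat v[16] = {0.0f};  // large enough for a mat4
        glGetUniformfv(program, loc, v);
        printf("%s (location %d): %f %f %f %f ...\n", name, loc, v[0], v[1], v[2], v[3]);
    }
}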
I have a simple compositing system which is supposed to render different textures and a background texture into an FBO. It also renders some primitives.
Here's an example:
I'm rendering using a simple GLSL shader for the textures and another one for the primitives. Also, I'm waiting for each draw to finish by calling glFinish after each glDrawArrays call.
So basically:
tex shader (background tex)
tex shader (tex 1)
primitive shader
tex shader (tex 2)
tex shader (tex 3)
When I only do this once, it works. But if I do another render pass directly after the first one has finished, some textures just aren't rendered.
The primitive, however, is always rendered.
This doesn't always happen, but the more textures I draw, the more often it occurs.
I'm therefore assuming that this is a timing problem.
I've tried to troubleshoot for the last two days and I just can't find the reason for this.
I'm 100% sure that the textures are always valid (I read them back using glGetTexImage to verify).
Here are my texture shaders.
Vertex shader:
#version 150
uniform mat4 mvp;
in vec2 inPosition;
in vec2 inTexCoord;
out vec2 texCoordV;
void main(void)
{
texCoordV = inTexCoord;
gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment shader:
#version 150
uniform sampler2D tex;
in vec2 texCoordV;
out vec4 fragColor;
void main(void)
{
fragColor = texture(tex, texCoordV);
}
And here's my invocation:
NSRect drawDestRect = NSMakeRect(xPos, yPos, str.texSize.width, str.texSize.height);
NLA_VertexRect rect = NLA_VertexRectFromNSRect(drawDestRect);
int texID = 0;
NLA_VertexRect texCoords = NLA_VertexRectFromNSRect(NSMakeRect(0.0f, 0.0f, 1.0f, 1.0f));
NLA_VertexRectFlipY(&texCoords);
[self.texApplyShader.arguments[@"inTexCoord"] setValue:&texCoords forNumberOfVertices:4];
[self.texApplyShader.arguments[@"inPosition"] setValue:&rect forNumberOfVertices:4];
[self.texApplyShader.arguments[@"tex"] setValue:&texID forNumberOfVertices:1];
GetError();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, str.texName);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glFinish();
The setValue:forNumberOfVertices: method is an object-based wrapper around OpenGL's parameter-application functions. It basically does this:
glBindVertexArray(_vertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray((GLuint)self.boundLocation);
glVertexAttribPointer((GLuint)self.boundLocation, numVectorElementsForType, GL_FLOAT, GL_FALSE, 0, 0);
Here are two screenshots of what it should look like (taken after first render pass) and what it actually looks like (taken after second render pass):
https://www.dropbox.com/s/0nmquelzo83ekf6/GLRendering_issues_correct.png?dl=0
https://www.dropbox.com/s/7aztfba5mbeq5sj/GLRendering_issues_wrong.png?dl=0
(in this example, the background texture is just black)
The primitive shader is as simple as it gets:
Vertex:
#version 150
uniform mat4 mvp;
uniform vec4 inColor;
in vec2 inPosition;
out vec4 colorV;
void main (void)
{
colorV = inColor;
gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment:
#version 150
in vec4 colorV;
out vec4 fragColor;
void main(void)
{
fragColor = colorV;
}
Found the issue... I didn't realize that the FBO was already being drawn to the screen after the first render pass. This happens on a different thread and wasn't locked properly.
Apparently the context was switched while the compositing took place, which explains why it caused different issues seemingly at random, depending on when the second thread switched the context.
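For reference, a hedged sketch of the usual fix on macOS, where a CGL/NSOpenGL context must never be used by two threads at once (sharedCtx is an illustrative name for the CGLContextObj both threads render with, obtained once from the NSOpenGLContext):

// serialise all access to the context that both threads share
CGLLockContext(sharedCtx);
// ... make the context current and run the compositing pass ...
glFlush();
CGLUnlockContext(sharedCtx);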
I recently started learning GLSL, and now I have a problem with texturing. I've read all the topics about it and found the same solid-color problem, but there it was caused by something different. So, I have a simple quadrilateral (the ground) and I simply want to render a grass texture on it. Shaders:
Fragment:
#version 330
uniform sampler2D color_texture;
in vec4 color;
out vec2 texCoord0;
void main()
{
gl_FragColor = color+texture(color_texture,texCoord0.st);
}
Vertex:
#version 330
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 a_Vertex;
in vec3 a_Color;
in vec2 a_texCoord0;
out vec4 color;
out vec2 texCoord0;
void main()
{
texCoord0 = a_texCoord0;
gl_Position = (projection_matrix * modelview_matrix) * vec4(a_Vertex, 1.0);
color = vec4(a_Color,0.3);
}
My texture and primitive coords:
static GLint m_primcoords[12]=
{0,0,0,
0,0,100,
100,0,100,
100,0,0};
static GLfloat m_texcoords[8]=
{0.0f,0.0f,
0.0f,1.0f,
1.0f,1.0f,
1.0f,0.0f};
Buffers:
glBindBuffer(GL_ARRAY_BUFFER,vertexcBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLint)*12,m_primcoords,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,colorBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*12,m_colcoords,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,textureBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*8,m_texcoords,GL_STATIC_DRAW);
and my rendering method:
GLfloat modelviewMatrix[16];
GLfloat projectionMatrix[16];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
cameraMove();
GLuint texturegrass = ploadtexture("grass.BMP");
glBindTexture(GL_TEXTURE_2D, texturegrass);
glGetFloatv(GL_MODELVIEW_MATRIX,modelviewMatrix);
glGetFloatv(GL_PROJECTION_MATRIX,projectionMatrix);
shaderProgram->sendUniform4x4("modelview_matrix",modelviewMatrix);
shaderProgram->sendUniform4x4("projection_matrix",projectionMatrix);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glActiveTexture(GL_TEXTURE0);
shaderProgram->sendUniform("color_texture",0);
glBindBuffer(GL_ARRAY_BUFFER,colorBuffer);
glVertexAttribPointer((GLint)1,3,GL_FLOAT,GL_FALSE,0,0);
glBindBuffer(GL_ARRAY_BUFFER,textureBuffer);
glVertexAttribPointer((GLint)2,2,GL_FLOAT,GL_FALSE,0,(GLvoid*)m_texcoords);
glBindBuffer(GL_ARRAY_BUFFER,vertexcBuffer);
glVertexAttribPointer((GLint)0,3,GL_INT,GL_FALSE,0,0);
glDrawArrays(GL_QUADS,0,12);
So, it looks like the code only reads 4 pixels from my texture (the corners), and the output color becomes outColor = ctopleft + ctopright + cbotleft + cbotright, like this.
I can send more code if you want, but I think the problem lies in these lines.
I've tried different coordinates, different orderings, everything. I've also read almost all the topics about problems like this. I'm using Beginning OpenGL Game Programming, 2nd ed., but I don't have the CD, so I can't check whether my code matches the book's, because only parts of the code are printed in it.
There are a couple of problems with your code.
In the fragment shader you have declared texCoord0 as out; it should be in in the fragment shader and out in the vertex shader, since it is passed from one to the other.
You are binding your texture before you set the "active" texture unit. It defaults to GL_TEXTURE0, but this is still bad practice.
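For clarity, a sketch of the fragment shader after that fix (this also replaces gl_FragColor, which is deprecated as of #version 330, with an explicitly declared output):

#version 330
uniform sampler2D color_texture;
in vec4 color;       // received from the vertex shader
in vec2 texCoord0;   // was declared "out" here; it must be "in"
out vec4 fragColor;  // replaces the deprecated gl_FragColor
void main()
{
fragColor = color + texture(color_texture, texCoord0.st);
}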
I'm making a 2D side-scroller game and I am currently implementing lights. The lights are just a light-gradient texture rendered on top of the terrain and multiplied to brighten up the area. However, I don't know how to do ambient lighting, nor do I really understand it. The following picture sums up what I have; the bottom part is what I want.
I am open to answers regarding shaders, for I know how to use them.
I ended up creating an FBO texture the size of the screen, clearing it to the ambient color and drawing in all the nearby lights. Then I passed it through a shader I made which takes in 2 textures as uniforms: the texture to draw and the light FBO itself. The shader multiplies the texture being drawn with the FBO, and it came out nicely.
ambience.frag
uniform sampler2D texture1;
uniform sampler2D texture2;
varying vec2 texCoord;
void main( void ) {
vec4 color1 = vec4(texture2D(texture1, gl_TexCoord[0].st));
vec4 color2 = vec4(texture2D(texture2, texCoord));
gl_FragColor = color1*vec4(color2.r,color2.g,color2.b,1.0);
}
ambience.vs
varying vec2 texCoord;
uniform vec2 screen;
uniform vec2 camera;
void main(){
gl_Position = ftransform();
gl_TexCoord[0] = gl_MultiTexCoord0;
vec2 temp = vec2(gl_Vertex.x,gl_Vertex.y)-camera;
texCoord = temp/screen;
}
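A hedged host-side sketch of the approach described above (assuming GL 3.0-style FBO entry points; lightFBO, lightTex, screenW and screenH are illustrative names, and the drawing of the light-gradient quads is left out):

// one-time setup: a screen-sized texture attached to an FBO for the lights
GLuint lightFBO, lightTex;
glGenFramebuffers(1, &lightFBO);
glGenTextures(1, &lightTex);
glBindTexture(GL_TEXTURE_2D, lightTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, screenW, screenH, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, lightFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lightTex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// every frame: clear to the ambient colour, then additively draw nearby lights
glBindFramebuffer(GL_FRAMEBUFFER, lightFBO);
glClearColor(0.1f, 0.1f, 0.2f, 1.0f);  // the ambience colour
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);     // additive, so lights brighten the ambience
// ... draw the light-gradient quads here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// then render the scene with the ambience shader, with lightTex bound as texture2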