I have a simple compositing system which is supposed to render different textures and a background texture into an FBO. It also renders some primitives.
Here's an example:
I'm rendering using a simple GLSL shader for the textures and another one for the primitives, and I wait for each draw to finish by calling glFinish after each glDrawArrays call.
So basically:
tex shader (background tex)
tex shader (tex 1)
primitive shader
tex shader (tex 2)
tex shader (tex 3)
When I only do this once, it works. But if I do another render pass directly after the first one has finished, some textures just aren't rendered.
The primitive however is always rendered.
This doesn't always happen, but the more textures I draw, the more often it occurs. So I'm assuming this is a timing problem.
I've been troubleshooting this for the last two days and I just can't find the cause.
I'm 100% sure that the textures are always valid (I downloaded them using glGetTexImage to verify).
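That kind of readback check might look like this (a minimal sketch, not the original verification code; texWidth and texHeight are hypothetical placeholders for the texture's actual dimensions):
// Read the texture back into client memory for inspection (assumes an RGBA8 texture)
GLubyte *pixels = (GLubyte *)malloc(texWidth * texHeight * 4);
glBindTexture(GL_TEXTURE_2D, str.texName);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... compare pixels against the expected image ...
free(pixels);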
Here are my texture shaders.
Vertex shader:
#version 150
uniform mat4 mvp;
in vec2 inPosition;
in vec2 inTexCoord;
out vec2 texCoordV;
void main(void)
{
texCoordV = inTexCoord;
gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment shader:
#version 150
uniform sampler2D tex;
in vec2 texCoordV;
out vec4 fragColor;
void main(void)
{
fragColor = texture(tex, texCoordV);
}
And here's my invocation:
NSRect drawDestRect = NSMakeRect(xPos, yPos, str.texSize.width, str.texSize.height);
NLA_VertexRect rect = NLA_VertexRectFromNSRect(drawDestRect);
int texID = 0;
NLA_VertexRect texCoords = NLA_VertexRectFromNSRect(NSMakeRect(0.0f, 0.0f, 1.0f, 1.0f));
NLA_VertexRectFlipY(&texCoords);
[self.texApplyShader.arguments[@"inTexCoord"] setValue:&texCoords forNumberOfVertices:4];
[self.texApplyShader.arguments[@"inPosition"] setValue:&rect forNumberOfVertices:4];
[self.texApplyShader.arguments[@"tex"] setValue:&texID forNumberOfVertices:1];
GetError();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, str.texName);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glFinish();
The setValue:forNumberOfVertices: method is an object-based wrapper around OpenGL's parameter application functions. It basically does this:
glBindVertexArray(_vertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray((GLuint)self.boundLocation);
glVertexAttribPointer((GLuint)self.boundLocation, numVectorElementsForType, GL_FLOAT, GL_FALSE, 0, 0);
Here are two screenshots of what it should look like (taken after the first render pass) and what it actually looks like (taken after the second render pass):
https://www.dropbox.com/s/0nmquelzo83ekf6/GLRendering_issues_correct.png?dl=0
https://www.dropbox.com/s/7aztfba5mbeq5sj/GLRendering_issues_wrong.png?dl=0
(in this example, the background texture is just black)
The primitive shader is as simple as it gets:
Vertex:
#version 150
uniform mat4 mvp;
uniform vec4 inColor;
in vec2 inPosition;
out vec4 colorV;
void main (void)
{
colorV = inColor;
gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment:
#version 150
in vec4 colorV;
out vec4 fragColor;
void main(void)
{
fragColor = colorV;
}
Found the issue: I didn't realize that the FBO is already drawn to the screen after the first render pass. This happens on a different thread and wasn't locked properly.
Apparently the context was switched while the compositing took place, which explains why the issues appeared randomly, depending on when the second thread switched the context.
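The fix is to serialize access to the context between the two threads. A minimal sketch, assuming cglContext is the CGLContextObj backing the shared NSOpenGLContext (obtainable via -[NSOpenGLContext CGLContextObj]; the variable name is hypothetical):
// Compositing thread: hold the context lock for the whole render pass
CGLLockContext(cglContext);    // blocks if the presenting thread owns the context
// ... make the context current and composite into the FBO ...
CGLUnlockContext(cglContext);  // presenting thread may now switch and draw
The presenting thread takes the same lock around its own drawing, so neither thread can switch the context mid-pass.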
Related
I've been trying to sample from a 1D texture (loaded from a .png). I have a model with the correct texture coordinates and all, but I just can't get the texture to show up; the geometry renders plain black. There must be something I've misunderstood about textures in OpenGL, but I can't see it.
Any pointers?
C++
// Setup
GLint texCoordAttrib = glGetAttribLocation(batch_shader_program, "vTexCoord");
glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex<float>), (const void *)offsetof(Vertex<float>, texCoord));
glEnableVertexAttribArray(texCoordAttrib);
// Loading
GLuint load_1d_texture(std::string filepath) {
SDL_Surface *image = IMG_Load(filepath.c_str());
int width = image->w;
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_1D, texture);
glTexImage2D(GL_TEXTURE_1D, 0, GL_RGBA, width, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);
SDL_FreeSurface(image);
return texture;
}
// Rendering
glUseProgram(batch.gl_program);
glBindTexture(GL_TEXTURE_1D, batch.mesh.texture.gl_texture_reference);
glDraw***
Vertex Shader
#version 330 core
in vec3 position;
in vec4 vColor;
in vec3 normal; // Polygon normal
in vec2 vTexCoord;
// Model
in mat4 model;
out vec4 fColor;
out vec3 fTexcoord;
// View or a.k.a camera matrix
uniform mat4 camera_view;
// Projection or a.k.a perspective matrix
uniform mat4 projection;
void main() {
gl_Position = projection * camera_view * model * vec4(position, 1.0);
fTexcoord = vec3(vTexCoord, 1.0);
}
Fragment Shader
#version 330 core
in vec4 fColor;
out vec4 outColor;
in vec3 fTexcoord; // passthrough shading for interpolated textures
uniform sampler1D sampler;
void main() {
outColor = texture(sampler, fTexcoord.x);
}
glBindTexture(GL_TEXTURE_2D, texture);
glBindTexture(GL_TEXTURE_1D, batch.mesh.texture.gl_texture_reference);
Assuming that these two lines of code are talking about the same OpenGL object, you cannot do that. A texture that uses the 2D texture target is a 2D texture. It is not a 1D texture, nor is it a 2D array texture with one layer or a 3D texture with depth 1. It is a 2D texture.
Once you bind a texture object after generating it, the texture's target is fixed. You can use view textures to create a view of the same storage with different targets, but the original texture object itself is unaffected by this. And you can't create a 1D view of a 2D texture.
You should have gotten a GL_INVALID_OPERATION error when you tried to bind the 2D texture as if it were 1D. You should always check for errors when you run into OpenGL problems.
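To illustrate both points, here's a minimal sketch of loading with a consistent 1D target, plus an error-drain loop of the kind recommended above (the fprintf reporting is just illustrative; the filter parameter matters because the default minification filter expects mipmaps, and none are uploaded here):
// Create and fill the texture with a consistent 1D target
glBindTexture(GL_TEXTURE_1D, texture);            // target is now fixed to 1D
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, width, 0, // note: no height argument for 1D
             GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Drain and report every pending OpenGL error
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
    fprintf(stderr, "GL error: 0x%04x\n", err);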
In the end there was no problem with the texture itself, only a bug in the texture coordinate loading (it took the wrong indices from the vertices).
I'm using OpenGL with GLFW and GLEW. I'm rendering everything using shaders, but it seems like the depth buffer doesn't work.
The shaders I'm using for 3D rendering are:
Vertex Shader
#version 410
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec2 vt;
uniform mat4 view, proj, model;
out vec2 texture_coordinates;
void main() {
texture_coordinates = vt;
gl_Position = proj * view * model* vec4(vertex_position, 1.0);
}
Fragment Shader
#version 410
in vec2 texture_coordinates;
uniform sampler2D basic_texture;
out vec4 frag_colour;
void main() {
vec4 texel = texture(basic_texture, vec2(texture_coordinates.x, 1 - texture_coordinates.y));
frag_colour = texel;
}
and I'm also enabling the depth buffer and face culling:
glEnable(GL_DEPTH_BUFFER);
glDepthFunc(GL_NEVER);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
This is how it looks:
The cube is rendered first, because it is the first group of the mesh, and the monkey is then always rendered in front of it. If I change the rendering order, the cube ends up in front instead.
Another example: you can see the ear of the monkey being rendered in front.
You're not enabling depth testing. Change glEnable(GL_DEPTH_BUFFER); to glEnable(GL_DEPTH_TEST);. This error could have been detected using glGetError().
Like SurvivalMachine said, change GL_DEPTH_BUFFER to GL_DEPTH_TEST. Also make sure that in your main loop you call glClear(GL_DEPTH_BUFFER_BIT) before any drawing commands.
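Putting both answers together, a minimal sketch of the corrected setup (note that the question's glDepthFunc(GL_NEVER) would reject every fragment once testing is enabled; GL_LESS is the usual choice):
// One-time state setup
glEnable(GL_DEPTH_TEST);   // GL_DEPTH_BUFFER is not a valid glEnable() cap
glDepthFunc(GL_LESS);      // keep fragments closer than the stored depth
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);

// Every frame, before any draw calls
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);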
I'm working with the OpenTK wrapper in C# and trying to use displacement vertex shaders to generate 3D models.
I can run dummy shaders to render cubes and triangles, but now I want to create a 3D grid using texture data. As a first attempt, I created a .png image with different areas in red and black.
For reference, here is the texture-loading function:
int loadImage(Bitmap image)
{
int texID = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture2D, texID);
System.Drawing.Imaging.BitmapData data = image.LockBits(new System.Drawing.Rectangle(0, 0, image.Width, image.Height),
System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
image.UnlockBits(data);
GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);
return texID;
}
As far as I've read in the documentation, after loading the texture I bind both arrays (vertex positions and texcoords) and call GL.UseProgram. I assume the texture is then bound and loaded, isn't it?
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, objects[0].TextureID);
int loc = GL.GetUniformLocation(shaders[activeShader].ProgramID, "maintexture");
GL.Uniform1(loc, 0);
GL.UniformMatrix4(shaders[activeShader].GetUniform("modelview"), false, ref objects[0].ModelViewProjectionMatrix);
vertex shader:
#version 330
in vec3 vPosition;
in vec2 texcoord;
out vec2 f_texcoord;
uniform mat4 modelview;
uniform sampler2D maintexture;
void main()
{
vec3 newPos = vPosition;
newPos.y += texture(maintexture, texcoord).r;
gl_Position = modelview * (vec4(newPos, 1.0) );
f_texcoord = texcoord;
}
What I'm trying to achieve is that the red areas in the input texture appear as elevated vertices, and black areas produce vertices at 'ground' level, but I'm getting a perfectly flat grid and I can't understand why.
I'm using OpenGL to draw a large array of 2D points with their colors. Each point (vertex) also has its alpha channel defined in the MX.c array. I'd like to be able to increase or decrease the alpha value of the whole array (of every vertex displayed) at once. Is there a clever way to do this using OpenGL functions? Here's my drawing method:
void PointsMX::drawMX()
{
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, MX.c);
glVertexPointer(2, GL_DOUBLE, 0, MX.p);
glPushMatrix();
glTranslated(position[X], position[Y], 0.0);
glScaled(scale, scale, 1.0);
glDrawArrays(GL_POINTS, 0, MX.size);
glPopMatrix();
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
}
As datenwolf points out in his comments, you can do this pretty simply using a shader, but not using the fixed function pipeline (which is what you're using if you never call glUseProgram()).
If you're not using lighting, reproducing the fixed function shaders isn't very hard, and a little googling will help you get up to that point.
The key here is that you want to change something that is normally a vertex attribute (the alpha channel of the color) to a configurable value for the entire drawing operation. In shader terms this means overriding the vertex attribute with a uniform. A uniform is simply a value you pass into an OpenGL program which then has the same value for every vertex or fragment processed (depending on whether you put it into the vertex or fragment shader).
Here's an example of a very basic vertex shader:
#version 330
uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);
layout(location = 0) in vec3 Position;
layout(location = 3) in vec4 Color;
out vec4 vColor;
void main() {
gl_Position = Projection * ModelView * vec4(Position, 1);
vColor = Color;
}
And a corresponding fragment shader
#version 330
in vec4 vColor;
out vec4 FragColor;
void main()
{
FragColor = vColor;
}
In order to accomplish what you're trying to do, you'd want to change the vertex shader to add an additional uniform representing your alpha override:
#version 330
uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);
uniform float AlphaOverride = -1.0;
layout(location = 0) in vec3 Position;
layout(location = 3) in vec4 Color;
out vec4 vColor;
void main() {
gl_Position = Projection * ModelView * vec4(Position, 1);
vColor = Color;
if (AlphaOverride > 0.0) {
vColor.a = AlphaOverride;
}
}
If you fail to set the AlphaOverride uniform it will be -1, and will therefore be ignored by the vertex shader. But if you set it to a value between 0 and 1, then it will be applied to the alpha channel of your vertex.
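On the client side, setting the override might look like this (a minimal sketch; program stands for your linked shader program handle):
// Force every vertex's alpha to 0.5 for this draw, then restore per-vertex alpha
GLint alphaLoc = glGetUniformLocation(program, "AlphaOverride");
glUseProgram(program);
glUniform1f(alphaLoc, 0.5f);
// ... issue the draw call, e.g. glDrawArrays(GL_POINTS, 0, MX.size); ...
glUniform1f(alphaLoc, -1.0f);  // a negative value disables the override again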
I recently started learning GLSL, and now I have a problem with texturing. I've read every topic I could find about it and found the same solid-color problem, but it was always caused by something different. I have a simple quadrilateral (the ground) and I simply want to render a grass texture on it. Shaders:
Fragment:
#version 330
uniform sampler2D color_texture;
in vec4 color;
out vec2 texCoord0;
void main()
{
gl_FragColor = color+texture(color_texture,texCoord0.st);
}
Vertex:
#version 330
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 a_Vertex;
in vec3 a_Color;
in vec2 a_texCoord0;
out vec4 color;
out vec2 texCoord0;
void main()
{
texCoord0 = a_texCoord0;
gl_Position = (projection_matrix * modelview_matrix) * vec4(a_Vertex, 1.0);
color = vec4(a_Color,0.3);
}
My texture and primitive coords:
static GLint m_primcoords[12]=
{0,0,0,
0,0,100,
100,0,100,
100,0,0};
static GLfloat m_texcoords[8]=
{0.0f,0.0f,
0.0f,1.0f,
1.0f,1.0f,
1.0f,0.0f};
Buffers:
glBindBuffer(GL_ARRAY_BUFFER,vertexcBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLint)*12,m_primcoords,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,colorBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*12,m_colcoords,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,textureBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*8,m_texcoords,GL_STATIC_DRAW);
and my rendering method:
GLfloat modelviewMatrix[16];
GLfloat projectionMatrix[16];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
cameraMove();
GLuint texturegrass = ploadtexture("grass.BMP");
glBindTexture(GL_TEXTURE_2D, texturegrass);
glGetFloatv(GL_MODELVIEW_MATRIX,modelviewMatrix);
glGetFloatv(GL_PROJECTION_MATRIX,projectionMatrix);
shaderProgram->sendUniform4x4("modelview_matrix",modelviewMatrix);
shaderProgram->sendUniform4x4("projection_matrix",projectionMatrix);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glActiveTexture(GL_TEXTURE0);
shaderProgram->sendUniform("color_texture",0);
glBindBuffer(GL_ARRAY_BUFFER,colorBuffer);
glVertexAttribPointer((GLint)1,3,GL_FLOAT,GL_FALSE,0,0);
glBindBuffer(GL_ARRAY_BUFFER,textureBuffer);
glVertexAttribPointer((GLint)2,2,GL_FLOAT,GL_FALSE,0,(GLvoid*)m_texcoords);
glBindBuffer(GL_ARRAY_BUFFER,vertexcBuffer);
glVertexAttribPointer((GLint)0,3,GL_INT,GL_FALSE,0,0);
glDrawArrays(GL_QUADS,0,12);
So it looks like the code only reads four pixels from my texture (the corners), and the output color ends up as something like outColor = ctopleft + ctopright + cbotleft + cbotright.
I can send more code if you want, but I think the problem lies in these lines.
I tried different coordinates, different ordering, everything. I'm using Beginning OpenGL Game Programming, 2nd ed., but I don't have the CD, so I can't check whether I'm coding it correctly; only parts of the code are printed in the book.
There are a couple of problems with your code.
In the fragment shader, you have declared texCoord0 as out. It should be in in the fragment shader and out in the vertex shader, since it is passed from the vertex shader to the fragment shader.
You are binding your texture before you set the "active" texture unit. It defaults to GL_TEXTURE0, but this is still bad practice.
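A minimal sketch of the recommended order, using the names from the question (the sampler uniform receives the texture unit index, not the texture object id):
glActiveTexture(GL_TEXTURE0);                    // select the unit first
glBindTexture(GL_TEXTURE_2D, texturegrass);      // then bind the texture to that unit
shaderProgram->sendUniform("color_texture", 0);  // 0 = texture unit, not texture name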