Textures from one VAO being applied to another VAO - c++
I am currently trying to render text in OpenGL using bitmap files. When it's by itself, the font looks as expected.
Exhibit A:
When adding a separate texture (a picture) in a separate VAO OR more text in the same VAO, "This engine can render text!" still looks the same.
However, when adding both the texture in a separate VAO AND more text in the same VAO, the texture of "This engine can render text!" gets modified.
Exhibit B:
What's really strange to me is that the textures seem to be blended, and that it only affects a few vertices rather than the entire VBO.
Is this an OpenGL/driver problem, or is it something else? I double-checked the vertices, and the 'his' glyphs are not being rendered with the picture texture active. I am using OSX, which is notorious for poor OpenGL support, in case that is relevant.
My rendering loop:
//scene is just a class that bundles all the necessary information for rendering.
//when rendering text, it is all batched inside of one scene,
//so independent textures of text characters should be impossible
glUseProgram(prgmid);
for(auto& it : scene->getTextures() )
{
    //load textures
    const Texture::Data* data = static_cast<const Texture::Data*>(it->getData() );
    glActiveTexture(GL_TEXTURE0 + it->getID() );
    glBindTexture(GL_TEXTURE_2D, it->getID() );
    glUniform1i(glGetUniformLocation(prgmid, data->name), it->getID() );
}
for(auto& it : scene->getUniforms() )
{
    processUniforms(scene, it, prgmid);
}
glBindVertexArray(scene->getMesh()->getVAO() );
glDrawElements(GL_TRIANGLES, scene->getMesh()->getDrawCount(), GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
Shaders of the text:
//Vertex
#version 330 core
layout (location = 0) in vec4 pos;
layout (location = 1) in vec2 fontTexCoord;
out vec2 fontTexCoords;
uniform mat4 __projection;
void main()
{
    fontTexCoords = vec2(fontTexCoord.x, fontTexCoord.y);
    gl_Position = __projection * pos;
}
//Frag
#version 330 core
in vec2 fontTexCoords;
out vec4 color;
uniform sampler2D fontbmp;
void main()
{
    color = texture(fontbmp, fontTexCoords);
    if(color.rgb == vec3(0.0, 0.0, 0.0) ) discard;
}
Shaders of the picture:
//vert
#version 330 core
layout (location = 0) in vec4 pos;
layout (location = 1) in vec2 texCoord;
out vec2 TexCoords;
uniform mat4 __projection;
uniform float __spriteFrameRatio;
uniform float __spriteFramePos;
uniform float __flipXMult;
uniform float __flipYMult;
void main()
{
    TexCoords = vec2(((texCoord.x + __spriteFramePos) * __spriteFrameRatio) * __flipXMult, texCoord.y * __flipYMult);
    gl_Position = __projection * pos;
}
//frag
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D __image;
uniform vec4 __spriteColor;
uniform bool __is_texture;
void main()
{
    if(__is_texture)
    {
        color = __spriteColor * texture(__image, TexCoords);
    }
    else
    {
        color = __spriteColor;
    }
}
EDIT:
I believe the code causing the problem is the buffer-generation code below. It is called every time a scene (VAO, VBO, EBO, texture) object is rendered.
if(!REALLOCATE_BUFFER && !ATTRIBUTE_ADDED) return;
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
if(REALLOCATE_BUFFER)
{
    size_t vsize = _vert.size() * sizeof(decltype(_vert)::value_type);
    size_t isize = _indc.size() * sizeof(decltype(_indc)::value_type);
    if(_prevsize != vsize)
    {
        _prevsize = vsize;
        glBufferData(GL_ARRAY_BUFFER, vsize, &_vert[0], _mode);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, isize, &_indc[0], _mode);
    }
    else
    {
        glBufferSubData(GL_ARRAY_BUFFER, 0, vsize, &_vert[0]);
        glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, isize, &_indc[0]);
    }
}
if(ATTRIBUTE_ADDED)
{
    for(auto& itt : _attrib)
    {
        glVertexAttribPointer(itt.index, itt.size, GL_FLOAT, itt.normalized, _currstride * sizeof(GLfloat), (GLvoid*)(itt.pointer * sizeof(GLfloat) ) );
        glEnableVertexAttribArray(itt.index);
    }
}
glBindVertexArray(0);
When we comment out glBufferSubData so that glBufferData is always called, the problem area flickers and iterates through all textures, including the ones in other VAOs.
EDIT 2:
For some reason, everything works as expected when the text is rendered with a different mode than the picture, say GL_STREAM_DRAW and GL_DYNAMIC_DRAW, for instance. How can this be?
So the thing I messed up was that getDrawCount() was returning the number of VERTICES rather than the number of indices. Astonishingly, this didn't cause OpenGL to throw any errors, and fixing it solved the flickering.
Related
Simple GLSL render chain doesn't draw reliably
I have a simple compositing system which is supposed to render different textures and a background texture into an FBO. It also renders some primitives. Here's an example: I'm rendering using a simple GLSL shader for the texture and another one for the primitive. Also, I'm waiting for each shader to finish using glFinish after each glDrawArrays call. So basically:
tex shader (background tex)
tex shader (tex 1)
primitive shader
tex shader (tex 2)
tex shader (tex 3)
When I only do this once, it works. But if I do another render pass directly after the first one finished, some textures just aren't rendered. The primitive, however, is always rendered. This doesn't always happen, but the more textures I draw, the more often it occurs. Thus, I'm assuming that this is a timing problem. I tried to troubleshoot for the last two days and I just can't find the reason for this. I'm 100% sure that the textures are always valid (I downloaded them using glGetTexImage to verify). Here are my texture shaders.
Vertex shader:
#version 150
uniform mat4 mvp;
in vec2 inPosition;
in vec2 inTexCoord;
out vec2 texCoordV;
void main(void)
{
    texCoordV = inTexCoord;
    gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment shader:
#version 150
uniform sampler2D tex;
in vec2 texCoordV;
out vec4 fragColor;
void main(void)
{
    fragColor = texture(tex, texCoordV);
}
And here's my invocation:
NSRect drawDestRect = NSMakeRect(xPos, yPos, str.texSize.width, str.texSize.height);
NLA_VertexRect rect = NLA_VertexRectFromNSRect(drawDestRect);
int texID = 0;
NLA_VertexRect texCoords = NLA_VertexRectFromNSRect(NSMakeRect(0.0f, 0.0f, 1.0f, 1.0f));
NLA_VertexRectFlipY(&texCoords);
[self.texApplyShader.arguments[@"inTexCoord"] setValue:&texCoords forNumberOfVertices:4];
[self.texApplyShader.arguments[@"inPosition"] setValue:&rect forNumberOfVertices:4];
[self.texApplyShader.arguments[@"tex"] setValue:&texID forNumberOfVertices:1];
GetError();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, str.texName);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glFinish();
The setValue:forNumberOfVertices: function is an object-based wrapper around OpenGL's parameter application functions.
It basically does this:
glBindVertexArray(_vertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray((GLuint)self.boundLocation);
glVertexAttribPointer((GLuint)self.boundLocation, numVectorElementsForType, GL_FLOAT, GL_FALSE, 0, 0);
Here are two screenshots of what it should look like (taken after the first render pass) and what it actually looks like (taken after the second render pass):
https://www.dropbox.com/s/0nmquelzo83ekf6/GLRendering_issues_correct.png?dl=0
https://www.dropbox.com/s/7aztfba5mbeq5sj/GLRendering_issues_wrong.png?dl=0
(in this example, the background texture is just black)
The primitive shader is as simple as it gets:
Vertex:
#version 150
uniform mat4 mvp;
uniform vec4 inColor;
in vec2 inPosition;
out vec4 colorV;
void main (void)
{
    colorV = inColor;
    gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment:
#version 150
in vec4 colorV;
out vec4 fragColor;
void main(void)
{
    fragColor = colorV;
}
Found the issue... I didn't realize that the FBO is drawn to the screen already after the first render pass. This happens on a different thread and wasn't locked properly. Apparently the context was switched while the compositing took place which explains why it caused different issues randomly depending on when the second thread switched the context.
OpenGL error thrown by fragment shader
I'm writing a 2D game in OpenTK, using OpenGL 4.4. Using colour and texture UV coordinates and a matrix I can successfully draw textures between vertices with vertex shader:
public const string vertexShaderDefaultSrc =
@"
#version 330
uniform mat4 MVPMatrix;
layout (location = 0) in vec2 Position;
layout (location = 1) in vec2 Texture;
layout (location = 2) in vec4 Colour;
out vec2 InTexture;
out vec4 OutColour;
void main()
{
    gl_Position = MVPMatrix * vec4(Position, 0, 1);
    InTexture = Texture;
    OutColour = Colour;
}";
and fragment shader:
public const string fragmentShaderDefaultSrc =
@"
#version 330
uniform sampler2D Sampler;
in vec2 InTexture;
in vec4 OutColour;
out vec4 OutFragColor;
void main()
{
    OutFragColor = texture(Sampler, InTexture) * OutColour;
    //Alpha test
    if(OutFragColor.a <= 0) discard;
}";
BUT if I want to draw just a solid colour rather than a texture, I use this shader (with the same vertices, passing UV coords that won't be used):
public const string fragmentShaderSolidColourSrc =
@"
#version 330
uniform sampler2D Sampler;
in vec2 InTexture;
in vec4 OutColour;
out vec4 OutFragColor;
void main()
{
    OutFragColor = OutColour;
    //Alpha test
    if(OutFragColor.a <= 0) discard;
}";
Now this works beautifully, but OpenGL reports an error - GL_INVALID_VALUE. It draws fine and everything seems to work, but ideally I would like OpenGL to be error free in that situation, so I can catch real errors. I would appreciate any help, and can share more detail of how the shader is compiled or used if that is helpful - what I don't understand is how the default shader can work but the solid colour doesn't.
I have tracked down the exact source of the errors in my render call (the shader builds with no problems):
GL.EnableVertexAttribArray(shader.LocationPosition);
GL.VertexAttribPointer(shader.LocationPosition, 2, VertexAttribPointerType.Float, false, Stride, 0);
//-----everything up to here is fine
//this line throws an error
GL.EnableVertexAttribArray(shader.LocationTexture);
//as does this line
GL.VertexAttribPointer(shader.LocationTexture, 2, VertexAttribPointerType.Float, false, Stride, 8);
//this is all ok
GL.EnableVertexAttribArray(shader.LocationColour);
GL.VertexAttribPointer(shader.LocationColour, 4, VertexAttribPointerType.UnsignedByte, true, Stride, 16);
//ok
GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBuffer);
GL.DrawArrays(DrawType, 0, Vertices.Length);
//ok
GL.DisableVertexAttribArray(shader.LocationPosition);
//this line throws error
GL.DisableVertexAttribArray(shader.LocationTexture);
//this is ok
GL.DisableVertexAttribArray(shader.LocationColour);
It appears to me after some tests (it would be nice to have this verified) that if a variable such as the texture coordinates is not used by the shader, the compiler optimizes it away, so a call to get its location returns -1. Simply checking whether locationTexture was -1, and if so not enabling/binding locationTexture etc., resolved my issues.
OpenGL horizontal pixel pairs drawn swapped
I have a problem that is extremely similar to the one described in "OpenGL pixels drawn with each horizontal pair swapped". The main difference is that I'm getting this distortion even when I feed the texture one-byte red-only values.
EDIT: By closer inspection of normal textures, I have discovered that this problem manifests when rendering any 2D texture. I tried rotating the resulting texture by swapping the texture coordinates. The resulting picture still has swapped horizontal pixel pairs - so I'm assuming that the data in the texture is good, and the distortion occurs when rendering the texture.
Here are the relevant parts of the code:
C++:
struct coord_t
{
    float x;
    float y;
};
GLint loc = glGetAttribLocation(program, "coord");
if (loc != -1)
{
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, sizeof(coord_t), (void *)(offsetof(coord_t, x)));
    glEnableVertexAttribArray(loc);
}
loc = glGetAttribLocation(program, "tex_coord");
if (loc != -1)
{
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, sizeof(coord_t), (void *)(4*sizeof(coord_t) + offsetof(coord_t, x)));
    glEnableVertexAttribArray(loc);
}
// ... Texture binding to GL_TEXTURE_2D ...
coord_t pos[] = {coord_t{-1.f,-1.f}, coord_t{1.f,-1.f},
                 coord_t{-1.f,1.f}, coord_t{1.f,1.f} };
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(pos), pos); // position
glBufferSubData(GL_ARRAY_BUFFER, sizeof(pos), sizeof(pos), pos); // texture coordinates
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Corresponding vertex shader:
#version 110
attribute vec2 coord;
attribute vec2 tex_coord;
varying vec2 tex_out;
void main(void)
{
    gl_Position = vec4(coord.xy, 0.0, 1.0);
    tex_out = tex_coord;
}
Corresponding fragment shader:
#version 110
uniform sampler2D my_texture;
varying vec2 tex_out;
void main(void)
{
    gl_FragColor = texture2D(my_texture, tex_out);
}
After extensive code investigation, I managed to find the culprit. I was setting the blending function incorrectly, using GL_SRC1_ALPHA and GL_ONE_MINUS_SRC1_ALPHA instead of GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA.
How do I get textures to work in OpenGL?
I'm using the tutorials on http://arcsynthesis.org/gltut/ to learn OpenGL, it's required, I have to use it. Mostly I want to apply the textures from Tutorial 15 onto objects in tutorial 7 (world with UBO). For now it seemed like the textures only work when mipmaps are turned on. This comes with a downside: The only mipmap used is the one with an index of zero, and that's the 1 colored 1x1 pixel one. I tried setting the minimum level of a mipmap higher or turning off mipmaps entirely, but even that doesn't fix thing, because then everything turns pitch black. Now I'll list the most important parts of my program EDIT: I guess I'll add more details... The vertex shader has something like this: #version 330 layout(location = 0) in vec4 position; layout(location = 1) in vec4 color; layout(location = 2) in vec3 normal; //Added these later layout(location = 5) in vec2 texCoord; out vec2 colorCoord; smooth out vec4 interpColor; out vec3 vertexNormal; out vec3 modelSpacePosition; out vec3 cameraSpacePosition; uniform mat4 worldToCameraMatrix; uniform mat4 modelToWorldMatrix; uniform mat3 normalModelToCameraMatrix; uniform vec3 dirToLight; uniform vec4 lightIntensity; uniform vec4 ambientIntensity; uniform vec4 baseColor; uniform mat4 cameraToClipMatrix; void main() { vertexNormal = normal; vec3 normCamSpace = normalize(normalModelToCameraMatrix * vertexNormal); cameraSpacePosition = normCamSpace; float cosAngIncidence = dot(normCamSpace, dirToLight); cosAngIncidence = clamp(cosAngIncidence, 0, 1); modelSpacePosition.x = position.x; modelSpacePosition.y = position.y; modelSpacePosition.z = position.z; vec4 temp = modelToWorldMatrix * position; temp = worldToCameraMatrix * temp; gl_Position = cameraToClipMatrix * temp; interpColor = ((lightIntensity * cosAngIncidence) + (ambientIntensity)) * baseColor; colorCoord= texCoord ; } The fragment shader like this: #version 330 in vec3 vertexNormal; in vec3 modelSpacePosition; smooth in vec4 interpColor; uniform vec3 
modelSpaceLightPos; uniform vec4 lightIntensity2; uniform vec4 ambientIntensity2; out vec4 outputColor; //Added later in vec2 colorCoord; uniform sampler2D colorTexture; void main() { vec3 lightDir2 = normalize(modelSpacePosition - modelSpaceLightPos); float cosAngIncidence2 = dot(normalize(vertexNormal), lightDir2); cosAngIncidence2 = clamp(cosAngIncidence2, 0, 1); float light2DistanceSqr = dot(modelSpacePosition - modelSpaceLightPos, modelSpacePosition - modelSpaceLightPos); //added vec4 texture2 = texture(colorTexture, colorCoord); outputColor = ((ambientIntensity2 + (interpColor*2))/4) + ((((interpColor) * lightIntensity2/200 * cosAngIncidence2) + (ambientIntensity2* interpColor )) /( ( sqrt(light2DistanceSqr) + light2DistanceSqr)/200 )); //No outputColor for texture testing outputColor = texture2 ; } } Those were both shaders. And here are the parts added to the .cpp: #include <glimg/glimg.h> #include "../framework/directories.h" [...] const int g_colorTexUnit = 0; GLuint g_checkerTexture = 0; And here's the loader for the texture: void LoadCheckerTexture() { try { std::string filename(LOCAL_FILE_DIR); filename += "checker.dds"; std::auto_ptr<glimg::ImageSet> pImageSet(glimg::loaders::dds::LoadFromFile(filename.c_str())); glGenTextures(1, &g_checkerTexture); glBindTexture(GL_TEXTURE_2D, g_checkerTexture); glimg::SingleImage image = pImageSet->GetImage(0, 0, 0); glimg::Dimensions dims = image.GetDimensions(); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, dims.width, dims.height, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, image.GetImageData()); glBindTexture(GL_TEXTURE_2D, 0); } catch(std::exception &e) { printf("%s\n", e.what()); throw; } } Naturally I've got this in void init(): LoadCheckerTexture(); And then when rendering the object: glActiveTexture(GL_TEXTURE0 + g_colorTexUnit); glBindTexture(GL_TEXTURE_2D,g_checkerTexture); g_pLeftMesh->Render(); glBindSampler(g_colorTexUnit, 0); glBindTexture(GL_TEXTURE_2D, 0); With all of this, I get put pitch black for 
everything. However, when I change the outputColor equation to "texture + outputColor;", everything looks normal. I have no idea what I'm doing wrong here. A friend tried to help me; we removed some unnecessary stuff, but we got nothing running.
Ok guys, I've worked on this whole thing, and did manage to somehow get it running. First off I had to add samplers: GLuint g_samplers; //Add Later void CreateSamplers() { glGenSamplers(1, &g_samplers); glSamplerParameteri(g_samplers, GL_TEXTURE_WRAP_S, GL_REPEAT); glSamplerParameteri(g_samplers, GL_TEXTURE_WRAP_T, GL_REPEAT); //Linear mipmap Nearest glSamplerParameteri(g_samplers, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glSamplerParameteri(g_samplers, GL_TEXTURE_MIN_FILTER, GL_NEAREST); } I also added this to the file thing: glimg::OpenGLPixelTransferParams xfer = glimg::GetUploadFormatType(pImageSet->GetFormat(), 0); glimg::SingleImage image = pImageSet->GetImage(0, 0, 0); glimg::Dimensions dims = image.GetDimensions(); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, dims.width, dims.height, 0, xfer.format, xfer.type, image.GetImageData()); The xfer variable does get the format and type adjusted to the dds. Also the render code got turned into this: //Added necessary glActiveTexture(GL_TEXTURE0 + g_colorTexUnit); glBindTexture(GL_TEXTURE_2D,g_checkerTexture); glBindSampler(g_colorTexUnit, g_samplers); g_pLeftMesh->Render(); glBindSampler(g_colorTexUnit, 0); glBindTexture(GL_TEXTURE_2D, 0); And of course at the end of init() I needed to add the CreateSamplers thing: //Added this later LoadCheckerTexture(); CreateSamplers(); I'm sorry for all the trouble with all this, but guess OpenGL really is just this confusing and it was just dumb luck that I got it right. Just posting this so that people know
Your fail to add textures may be caused by: Have you add texture coordinates to objects? (this is the most probable cause, because you are adding textures to non textured tutorial), add textures to VAO. Did you add uniform textureunit (Sampler2D)? (it must be uniform, else texturing will not work properly) Is your texture loaded,binded,enabled (GL_TEXTURE_2D) ? Is your active texture unit - 0? if not change layout/multitexture coords or set active texture 0 This two codes are simple texturing shaders (texture unit 0) no special things (like light,blend,bump,...): tm_l2g is transformation local obj space -> world space (Modelview) tm_g2s is transformation world space -> screen space (Projection) pos are vertex coordinates txt are texture coordinates col are colors Do not forget to change uniform names and layout locations to yours. Vertex: //------------------------------------------------------------------ #version 420 core //------------------------------------------------------------------ uniform mat4x4 tm_l2g; uniform mat4x4 tm_g2s; layout(location=0) in vec3 pos; layout(location=1) in vec4 col; layout(location=2) in vec2 txr; out smooth vec4 pixel_col; out smooth vec2 pixel_txr; //------------------------------------------------------------------ void main(void) { vec4 p; p.xyz=pos; p.w=1.0; p=tm_l2g*p; p=tm_g2s*p; gl_Position=p; pixel_col=col; pixel_txr=txr; } //------------------------------------------------------------------ fragment: //------------------------------------------------------------------ #version 420 core //------------------------------------------------------------------ in smooth vec4 pixel_col; in smooth vec2 pixel_txr; uniform sampler2D txr_texture0; out layout(location=0) vec4 frag_col; //------------------------------------------------------------------ void main(void) { vec4 col; col=texture(txr_texture0,pixel_txr.st); frag_col=col*pixel_col; } //------------------------------------------------------------------ [edit1] CPU old style 
OpenGL render code (initializations are not included its only render code they can be found here) //------------------------------------------------------------------ // set modelview,projection,textures,bind GLSL programs... GLfloat a=10.0,z=0.0; glColor3f(1.0,1.0,1.0); glBegin(GL_QUADS); // textured quad glTexCoord2f(0.0,0.0); glVertex3f(-a,-a,z); glTexCoord2f(0.0,1.0); glVertex3f(-a,+a,z); glTexCoord2f(1.0,1.0); glVertex3f(+a,+a,z); glTexCoord2f(1.0,0.0); glVertex3f(+a,-a,z); // reverse order quad to be shore that at least one passes by CULL_FACE glTexCoord2f(1.0,0.0); glVertex3f(+a,-a,z); glTexCoord2f(1.0,1.0); glVertex3f(+a,+a,z); glTexCoord2f(0.0,1.0); glVertex3f(-a,+a,z); glTexCoord2f(0.0,0.0); glVertex3f(-a,-a,z); glEnd(); //------------------------------------------------------------------ [edit2] ok here goes VAO/VBO render code,... //------------------------------------------------------------------------------ // enum of VBO locations (it is also your layout location) I use enums for simple in code changes enum _vbo_enum { _vbo_pos=0, // glVertex _vbo_col, // glColor _vbo_tan, // glNormal _vbo_unused0, // unused (at least i dont see anything at this location in your code) _vbo_unused1, // unused (at least i dont see anything at this location in your code) _vbo_txr, // glTexCoord _vbos }; //------------------------------------------------------------------------------ // 'global' names and size for OpenGL mesh in VAO/VBO ... similar ot texture names/handles GLuint vao[1],vbo[_vbos],num_pnt=0; //------------------------------------------------------------------------------ void VAO_init_cube() // call this before VAO use,...but after OpenGL init ! 
{ //[1] first you need some model to render (mesh), here is a simple cube // size,position of cube - change it that it is visible in your scene const GLfloat a=1.0,x=0.0,y=0.0,z=0.0; // cube points 3f x,y,z GLfloat mesh_pos[]= { x-a,y-a,z-a,x-a,y+a,z-a,x+a,y+a,z-a,x+a,y-a,z-a, x-a,y-a,z+a,x-a,y+a,z+a,x+a,y+a,z+a,x+a,y-a,z+a, x-a,y-a,z-a,x-a,y-a,z+a,x+a,y-a,z+a,x+a,y-a,z-a, x-a,y+a,z-a,x-a,y+a,z+a,x+a,y+a,z+a,x+a,y+a,z-a, x-a,y-a,z-a,x-a,y+a,z-a,x-a,y+a,z+a,x-a,y-a,z+a, x+a,y-a,z-a,x+a,y+a,z-a,x+a,y+a,z+a,x+a,y-a,z+a, }; // cube colors 3f r,g,b GLfloat mesh_col[]= { 0.0,0.0,0.0,0.0,1.0,0.0,1.0,1.0,0.0,1.0,0.0,0.0, 0.0,0.0,1.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,1.0, 0.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,1.0,1.0,0.0,0.0, 0.0,1.0,0.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0, 0.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,1.0, 1.0,0.0,0.0,1.0,1.0,0.0,1.0,1.0,1.0,1.0,0.0,1.0, }; // cube normals 3f x,y,z GLfloat mesh_tan[]= { -0.6,-0.6,-0.6,-0.6,+0.6,-0.6,+0.6,+0.6,-0.6,+0.6,-0.6,-0.6, -0.6,-0.6,+0.6,-0.6,+0.6,+0.6,+0.6,+0.6,+0.6,+0.6,-0.6,+0.6, -0.6,-0.6,-0.6,-0.6,-0.6,+0.6,+0.6,-0.6,+0.6,+0.6,-0.6,-0.6, -0.6,+0.6,-0.6,-0.6,+0.6,+0.6,+0.6,+0.6,+0.6,+0.6,+0.6,-0.6, -0.6,-0.6,-0.6,-0.6,+0.6,-0.6,-0.6,+0.6,+0.6,-0.6,-0.6,+0.6, +0.6,-0.6,-0.6,+0.6,+0.6,-0.6,+0.6,+0.6,+0.6,+0.6,-0.6,+0.6, }; // cube texture coords 2f s,t GLfloat mesh_txr[]= { 0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0, 0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0, 0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0, 0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0, 0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0, 0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0, }; // init VAO/VBO glGenVertexArrays(1,vao); // allocate 1 x VAO glGenBuffers(_vbos,vbo); // allocate _vbos x VBO // copy mesh to VAO/VBO ... 
after this you do not need the mesh anymore GLint i,sz,n; // n = number of numbers per 1 entry glBindVertexArray(vao[0]); num_pnt=sizeof(mesh_pos)/(sizeof(GLfloat)*3); // num of all points in mesh i=_OpenGLVAOgfx_pos; n=3; sz=sizeof(GLfloat)*n; glBindBuffer(GL_ARRAY_BUFFER,vbo[i]); glBufferData(GL_ARRAY_BUFFER,sz*num_pnt,mesh_pos,GL_STATIC_DRAW); glEnableVertexAttribArray(i); glVertexAttribPointer(i,n,GL_FLOAT,GL_FALSE,0,0); i=_OpenGLVAOgfx_col; n=3; sz=sizeof(GLfloat)*n; glBindBuffer(GL_ARRAY_BUFFER,vbo[i]); glBufferData(GL_ARRAY_BUFFER,sz*num_pnt,mesh_col,GL_STATIC_DRAW); glEnableVertexAttribArray(i); glVertexAttribPointer(i,n,GL_FLOAT,GL_FALSE,0,0); i=_OpenGLVAOgfx_tan; n=3; sz=sizeof(GLfloat)*n; glBindBuffer(GL_ARRAY_BUFFER,vbo[i]); glBufferData(GL_ARRAY_BUFFER,sz*num_pnt,mesh_tan,GL_STATIC_DRAW); glEnableVertexAttribArray(i); glVertexAttribPointer(i,n,GL_FLOAT,GL_FALSE,0,0); i=_OpenGLVAOgfx_txr; n=2; sz=sizeof(GLfloat)*n; glBindBuffer(GL_ARRAY_BUFFER,vbo[i]); glBufferData(GL_ARRAY_BUFFER,sz*num_pnt,mesh_txr,GL_STATIC_DRAW); glEnableVertexAttribArray(i); glVertexAttribPointer(i,n,GL_FLOAT,GL_FALSE,0,0); glBindVertexArray(0); } //------------------------------------------------------------------------------ void VAO_draw() // call this to draw your mesh,... need to enable and bind textures,... 
before use { glDisable(GL_CULL_FACE); glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LEQUAL); glBindVertexArray(vao[0]); glEnableVertexAttribArray(_vbo_pos); glEnableVertexAttribArray(_vbo_col); glEnableVertexAttribArray(_vbo_tan); glDisableVertexAttribArray(_vbo_unused0); glEnableVertexAttribArray(_vbo_txr); glDrawArrays(GL_QUADS,0,num_pnt); glDisableVertexAttribArray(_vbo_pos); glDisableVertexAttribArray(_vbo_col); glDisableVertexAttribArray(_vbo_tan); glDisableVertexAttribArray(_vbo_unused0); glDisableVertexAttribArray(_vbo_unused1); glDisableVertexAttribArray(_vbo_txr); glBindVertexArray(0); } //------------------------------------------------------------------------------ void VAO_exit() // clean up ... call this when you do not need VAO/VBO anymore { glDisableVertexAttribArray(_vbo_pos); glDisableVertexAttribArray(_vbo_col); glDisableVertexAttribArray(_vbo_tan); glDisableVertexAttribArray(_vbo_unused0); glDisableVertexAttribArray(_vbo_unused1); glDisableVertexAttribArray(_vbo_txr); glBindVertexArray(0); glDeleteVertexArrays(1,vao); glDeleteBuffers(_vbos,vbo); } //------------------------------------------------------------------------------ [edit3] if you are win32/64 user you can try my IDE for GLSL It is very simple and easy to use, but cannot change texture/attrib locations. Press [F1] for help,... [F9] for run [F10] for return to normal OpenGL mode. Also txt-editor is little buggy sometimes but it is enough for my purpose. GLSL IDE
OpenGL Uniform buffers?
I'm trying to use uniform buffers but it doesn't work as supposed. I have two uniform buffers, one is lighting and the other is for material. The problem is that the colors aren't what they are supposed to be and they change every time I move the camera. This problem didn't exist when I used normal uniforms. Here's pictures to show what I mean: When using uniform buffers and when using normal uniforms! This is my fragment shader: #version 400 // Fragment Shader uniform layout(std140); in vec3 EyePosition; in vec3 EyeNormal; in vec2 TexCoord; out vec4 FragColor; uniform sampler2D Texture; uniform LightBlock { vec4 Position; vec4 Intensity; } Light; uniform MaterialBlock { vec4 Ambient; vec4 Diffuse; } Material; vec4 PointLight(in int i, in vec3 ECPosition, in vec3 ECNormal) { vec3 n = normalize(ECNormal); vec3 s = normalize(Light.Position.xyz - ECPosition); return Light.Intensity * (Material.Ambient + Material.Diffuse * max(dot(s, n), 0.0)); } void main() { FragColor = texture(Texture, TexCoord); FragColor *= PointLight(0, EyePosition, EyeNormal); } I'm not sure I have done everything right but here's how I create the uniform buffers: glGenBuffers(1, &light_buffer); glGenBuffers(1, &material_buffer); glBindBuffer(GL_UNIFORM_BUFFER, light_buffer); glBufferData(GL_UNIFORM_BUFFER, sizeof(LightBlock), nullptr, GL_DYNAMIC_DRAW); glBindBuffer(GL_UNIFORM_BUFFER, material_buffer); glBufferData(GL_UNIFORM_BUFFER, sizeof(MaterialBlock), nullptr, GL_DYNAMIC_DRAW); GLuint program = Shaders.GetProgram(); light_index = glGetUniformBlockIndex(program, "LightBlock"); material_index = glGetUniformBlockIndex(program, "MaterialBlock"); glUniformBlockBinding(program, light_index, 0); glUniformBlockBinding(program, material_index, 1); glBindBufferBase(GL_UNIFORM_BUFFER, 0, light_buffer); glBindBufferBase(GL_UNIFORM_BUFFER, 1, material_buffer); EDIT: Here's how I fill the buffers: // Global structures struct LightBlock { Vector4 Position; // Vector4 is a vector class I made Vector4 
Intensity; }; struct MaterialBlock { Vector4 Ambient; Vector4 Diffuse; }; // This is called for every object rendered LightBlock Light; Light.Position = Vector3(0.0f, 5.0f, 5.0f) * Camera.GetCameraMatrix(); Light.Intensity = Vector4(1.0f); glBindBuffer(GL_UNIFORM_BUFFER, light_buffer); glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(LightBlock), &Light); MaterialBlock Material; Material.Diffuse = Vector4(1.0f); Material.Ambient = Material.Diffuse * Vector4(0.3f); glBindBuffer(GL_UNIFORM_BUFFER, material_buffer); glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(MaterialBlock), &Material);
I had the same problem, but only with AMD (not NVIDIA). The funny thing was that the problem only happened when changing the view matrix. As I had a repeatable problem depending on change of the view matrix, I was able to trace to the root cause (doing arduous trial and error). When changing the view in my application, I allocate and free some OpenGL resources dynamically depending on what is needed. In this process, there is a call to glDeleteBuffers() for buffer 0. If I use a conditional statement so as not to call glDeleteBuffers for buffer 0, then the problem goes away. According to documentation, buffer 0 will be silently ignored by glDeleteBuffers. My guess is that there is a bug in the AMD drivers.
Try to update the buffer using glMapBuffer/glUnmapBuffer.