Qt and OpenGL: can't draw a triangle if I use attributes - C++
I have a problem with a simple shader.
I plan to draw a triangle (just one for a start) in color. What I want: I calculate a color for each node of the triangle and give it to the vertex shader, which passes it to the fragment shader, producing a colorful triangle. What I get is nothing - no triangle. So I decided to simplify a little: I pass the parameters to the shaders but do not use them, and I get the same result. Here is the C++ code:
QVector4D colors[3];
...
glBegin(GL_TRIANGLES);
invers_sh.setAttributeValue("b_color", colors[0]);
glVertex2d(0, 0);
invers_sh.setAttributeValue("b_color", colors[1]);
glVertex2d(2.0, 0);
invers_sh.setAttributeValue("b_color", colors[2]);
glVertex2d(0, 2.0);
glEnd();
Vertex shader:
in vec4 vertex;
attribute vec4 b_color;
varying vec4 color_v;
uniform mat4 qt_ModelViewProjectionMatrix;
void main( void )
{
    gl_Position = qt_ModelViewProjectionMatrix * vertex;
    color_v = b_color;
}
Fragment shader:
varying vec4 color_v;
void main( void )
{
    gl_FragColor = vec4(1.0, 0, 0, 0);
}
I figured out that I get my red triangle if I comment out all the setAttributeValue calls in the C++ code and the line
color_v = b_color;
in the vertex shader.
Help me.
Try replacing each call of the form
invers_sh.setAttributeValue("b_color", colors[0]);
with the location-based overload:
invers_sh.setAttributeValue(b_colorLocation, colors[0]);
Declare a global for the location:
int b_colorLocation;
and add this where you compile your shaders, to get the location of b_color:
b_colorLocation = invers_sh.attributeLocation("b_color");
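The pattern in this answer - query the attribute location once after the program is linked, store it, and reuse the integer on every draw - can be sketched without any GL at all. Below, a lookup table stands in for the driver's name-to-location mapping; FakeProgram and onShadersCompiled are illustrative names, not Qt API:

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for the linked program's name->location table.
// In real code this would be QOpenGLShaderProgram / the GL driver.
struct FakeProgram {
    std::map<std::string, int> locations;
    int attributeLocation(const std::string& name) const {
        auto it = locations.find(name);
        return it == locations.end() ? -1 : it->second; // -1 = not found, as in GL
    }
};

int b_colorLocation = -1; // global, as the answer suggests

// Do the (relatively expensive) string lookup exactly once,
// right after shader compilation/linking.
void onShadersCompiled(const FakeProgram& prog) {
    b_colorLocation = prog.attributeLocation("b_color");
}
```

Per-draw code then uses only the cached integer, which is both faster and avoids silently passing an unresolved name each frame.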
Related
OpenGL Fragment Shaders - Changing a fixed color
At the moment I have a simple fragment shader which returns one color (red). If I want to change it to a different RGBA color from C code, how should I be doing that? Is it possible to change an attribute within the fragment shader from C directly, or should I be changing a solid color attribute in my vertex shader and then passing that color to the fragment shader? I'm drawing single solid colour rectangles - nothing special.
void main() { gl_FragColor = vec4( 1.0, 0, 0, 1 ); }
If you are talking about generating the shader at runtime, then you COULD use the C string formatting functions to insert the color into the "gl_FragColor..." line. I would not recommend you do this, since it is unnecessary work. The standard method is to use a uniform, like so:
// fragment shader:
uniform vec3 my_color; // A UNIFORM
void main()
{
    gl_FragColor.rgb = my_color;
    gl_FragColor.a = 1.0; // the alpha component
}
// your rendering code:
glUseProgram(SHADER_ID);
....
GLint color_location = glGetUniformLocation(SHADER_ID, "my_color");
float color[3] = {r, g, b};
glUniform3fv(color_location, 1, color);
....
glDrawArrays(....);
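One CPU-side detail worth checking when driving a color uniform like this: glUniform3fv expects normalized floats, so 8-bit channel values (0-255) have to be divided by 255 first. A minimal sketch; the helper name is hypothetical, not part of any GL API:

```cpp
#include <array>
#include <cassert>

// Convert 8-bit RGB channels (0-255) to the normalized floats
// that glUniform3fv expects. Hypothetical helper, not a GL call.
std::array<float, 3> rgb_to_floats(unsigned char r, unsigned char g, unsigned char b) {
    return { r / 255.0f, g / 255.0f, b / 255.0f };
}
```

Usage would look like: `auto c = rgb_to_floats(255, 128, 0); glUniform3fv(color_location, 1, c.data());`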
Simple GLSL render chain doesn't draw reliably
I have a simple compositing system which is supposed to render different textures and a background texture into an FBO. It also renders some primitives. Here's an example: I'm rendering using a simple GLSL shader for the textures and another one for the primitive. Also, I'm waiting for each shader to finish using glFinish after each glDrawArrays call. So basically:
tex shader (background tex)
tex shader (tex 1)
primitive shader
tex shader (tex 2)
tex shader (tex 3)
When I only do this once, it works. But if I do another render pass directly after the first one finished, some textures just aren't rendered. The primitive, however, is always rendered. This doesn't happen always, but the more textures I draw, the more often it occurs. Thus, I'm assuming that this is a timing problem. I tried to troubleshoot for the last two days and I just can't find the reason for this. I'm 100% sure that the textures are always valid (I downloaded them using glGetTexImage to verify). Here are my texture shaders.
Vertex shader:
#version 150
uniform mat4 mvp;
in vec2 inPosition;
in vec2 inTexCoord;
out vec2 texCoordV;
void main(void)
{
    texCoordV = inTexCoord;
    gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment shader:
#version 150
uniform sampler2D tex;
in vec2 texCoordV;
out vec4 fragColor;
void main(void)
{
    fragColor = texture(tex, texCoordV);
}
And here's my invocation:
NSRect drawDestRect = NSMakeRect(xPos, yPos, str.texSize.width, str.texSize.height);
NLA_VertexRect rect = NLA_VertexRectFromNSRect(drawDestRect);
int texID = 0;
NLA_VertexRect texCoords = NLA_VertexRectFromNSRect(NSMakeRect(0.0f, 0.0f, 1.0f, 1.0f));
NLA_VertexRectFlipY(&texCoords);
[self.texApplyShader.arguments[@"inTexCoord"] setValue:&texCoords forNumberOfVertices:4];
[self.texApplyShader.arguments[@"inPosition"] setValue:&rect forNumberOfVertices:4];
[self.texApplyShader.arguments[@"tex"] setValue:&texID forNumberOfVertices:1];
GetError();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, str.texName);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glFinish();
The setValue:forNumberOfVertices: method is an object-based wrapper around OpenGL's parameter application functions.
It basically does this:
glBindVertexArray(_vertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray((GLuint)self.boundLocation);
glVertexAttribPointer((GLuint)self.boundLocation, numVectorElementsForType, GL_FLOAT, GL_FALSE, 0, 0);
Here are two screenshots of what it should look like (taken after the first render pass) and what it actually looks like (taken after the second render pass):
https://www.dropbox.com/s/0nmquelzo83ekf6/GLRendering_issues_correct.png?dl=0
https://www.dropbox.com/s/7aztfba5mbeq5sj/GLRendering_issues_wrong.png?dl=0
(in this example, the background texture is just black)
The primitive shader is as simple as it gets:
Vertex:
#version 150
uniform mat4 mvp;
uniform vec4 inColor;
in vec2 inPosition;
out vec4 colorV;
void main (void)
{
    colorV = inColor;
    gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment:
#version 150
in vec4 colorV;
out vec4 fragColor;
void main(void)
{
    fragColor = colorV;
}
Found the issue... I didn't realize that the FBO was already being drawn to the screen after the first render pass. This happens on a different thread and wasn't locked properly. Apparently the context was switched while the compositing took place, which explains why it caused different issues seemingly at random, depending on when the second thread switched the context.
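The fix described above amounts to serializing all use of the shared context behind one lock, so the display thread can never switch contexts mid-composite. A minimal stand-in sketch (the GL/context calls are replaced by a counter, and the function names are mine, not from the question's code base):

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Serialize access to a shared GL context with a mutex so a second
// thread cannot switch contexts while compositing is in progress.
// The actual GL work is stood in for by a counter here.
std::mutex context_mutex;
int composite_passes = 0;

void composite_pass() {
    std::lock_guard<std::mutex> lock(context_mutex);
    // makeCurrent(); draw; glFinish(); doneCurrent(); would go here
    ++composite_passes;
}

void run_two_threads(int passes_each) {
    std::thread a([=] { for (int i = 0; i < passes_each; ++i) composite_pass(); });
    std::thread b([=] { for (int i = 0; i < passes_each; ++i) composite_pass(); });
    a.join();
    b.join();
}
```

With the lock in place, both "threads" complete every pass without interleaving; without it, the shared state (here the counter, in the real code the current context) can be corrupted.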
Overlaying a transparent color over a Texture with GLSL
I have an image that I am loading using the Slick library, and the image renders fine without my shader active. When I use my shader to overlay a transparent color over the image, the entire image is replaced by the transparent color. (screenshots omitted: without the shader / with the shader)
Vertex Shader
varying vec4 vertColor;
void main(){
    vec4 posMat = gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * posMat;
    vertColor = vec4(0.5, 1.0, 1.0, 0.2);
}
Fragment Shader
varying vec4 vertColor;
void main(){
    gl_FragColor = vertColor;
}
Sprite Rendering Code
Color.white.bind();
GL11.glBindTexture(GL11.GL_TEXTURE, image.getTextureID());
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(this.x, this.y);
GL11.glTexCoord2f(1, 0);
GL11.glVertex2f(x + w, y);
GL11.glTexCoord2f(1, 1);
GL11.glVertex2f(x + w, y + h);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(x, y + h);
GL11.glEnd();
GL11.glBindTexture(GL11.GL_TEXTURE, 0);
OpenGL Initialization
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, Screen.getW(), Screen.getH(), 0, -1, 1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
a) vertColor = vec4(0.5, 1.0, 1.0, 0.2);
b) gl_FragColor = vertColor;
The shader does exactly what you asked of it - it sets the color of all fragments to that color. If you want to blend colors, you should add/multiply them in the shader in some fashion (e.g. have a color attribute and/or texture sampler, and then, after passing the attribute from the vertex shader to the fragment shader, use gl_FragColor = vertexColor * textureColor * blendColor; etc.).
Also note: you're mixing the fixed-function pipeline and immediate mode (glBegin/glEnd) with shaders... that's not a good idea. Also, I don't see where your uniforms are set; using shaders without uniforms is asking for trouble. IMO the best solution would be either to use regular OpenGL >= 3.1 with proper, compliant shaders etc., or to use only the fixed-function pipeline and no shaders with legacy OpenGL.
As to how to load a texture with GLSL (see https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/texturing.php for more info if needed):
a) you feed the data to the GPU by creating a texture and binding it to a GPU texture unit:
GLuint id;
glGenTextures( 1, &id );
glBindTexture( GL_TEXTURE_2D, id );
glTexImage2D( ... ); // see https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml for details
(which I suppose you've already done, since you're calling glBindTexture with an image parameter already)
b) you provide UV texture coordinates for your geometry; you're already doing that by supplying glTexCoord2f, which will probably let you use the legacy attribute names as in https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php, but the proper way would be to pass them as part of a packed attribute structure,
c) you use the bound texture by sampling it in the shader, e.g.
(legacy GLSL follows)
// vertex shader
varying vec2 vTexCoord;
void main()
{
    vTexCoord = gl_MultiTexCoord0.st;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
// fragment shader
uniform sampler2D texture;
varying vec2 vTexCoord;
void main()
{
    vec4 colorMultiplier = vec4(0.5, 1.0, 1.0, 0.2);
    gl_FragColor = texture2D(texture, vTexCoord) * colorMultiplier;
}
Still, if you intend to change it at runtime, it'd be best to pass colorMultiplier as a uniform.
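Since the question's initialization enables glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), it also helps to see what that blend stage does with the alpha-0.2 fragment the shader outputs. A CPU reference of the blend equation (a sketch for checking expected values, not GL code):

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>; // r, g, b, a

// CPU reference for GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blending:
// out.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
// This is what the fixed-function blend stage does with the
// fragment color the shader emits.
Vec4 blend_over(const Vec4& src, const Vec4& dst) {
    float a = src[3];
    return { src[0] * a + dst[0] * (1.0f - a),
             src[1] * a + dst[1] * (1.0f - a),
             src[2] * a + dst[2] * (1.0f - a),
             a + dst[3] * (1.0f - a) };
}
```

For example, the question's vec4(0.5, 1.0, 1.0, 0.2) over a white background yields a red channel of 0.5*0.2 + 1.0*0.8 = 0.9, i.e. a faint tint rather than a solid overlay, once the shader actually samples the texture.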
How do I get textures to work in OpenGL?
I'm using the tutorials on http://arcsynthesis.org/gltut/ to learn OpenGL; it's required, I have to use it. Mostly I want to apply the textures from Tutorial 15 onto objects in Tutorial 7 (the world with a UBO). For now it seems like the textures only work when mipmaps are turned on. This comes with a downside: the only mipmap used is the one with an index of zero, and that's the one-colored 1x1 pixel one. I tried setting the minimum mipmap level higher or turning off mipmaps entirely, but even that doesn't fix things, because then everything turns pitch black. Now I'll list the most important parts of my program.
EDIT: I guess I'll add more details...
The vertex shader has something like this:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec3 normal;
//Added these later
layout(location = 5) in vec2 texCoord;
out vec2 colorCoord;
smooth out vec4 interpColor;
out vec3 vertexNormal;
out vec3 modelSpacePosition;
out vec3 cameraSpacePosition;
uniform mat4 worldToCameraMatrix;
uniform mat4 modelToWorldMatrix;
uniform mat3 normalModelToCameraMatrix;
uniform vec3 dirToLight;
uniform vec4 lightIntensity;
uniform vec4 ambientIntensity;
uniform vec4 baseColor;
uniform mat4 cameraToClipMatrix;
void main()
{
    vertexNormal = normal;
    vec3 normCamSpace = normalize(normalModelToCameraMatrix * vertexNormal);
    cameraSpacePosition = normCamSpace;
    float cosAngIncidence = dot(normCamSpace, dirToLight);
    cosAngIncidence = clamp(cosAngIncidence, 0, 1);
    modelSpacePosition.x = position.x;
    modelSpacePosition.y = position.y;
    modelSpacePosition.z = position.z;
    vec4 temp = modelToWorldMatrix * position;
    temp = worldToCameraMatrix * temp;
    gl_Position = cameraToClipMatrix * temp;
    interpColor = ((lightIntensity * cosAngIncidence) + (ambientIntensity)) * baseColor;
    colorCoord = texCoord;
}
The fragment shader like this:
#version 330
in vec3 vertexNormal;
in vec3 modelSpacePosition;
smooth in vec4 interpColor;
uniform vec3 modelSpaceLightPos;
uniform vec4 lightIntensity2;
uniform vec4 ambientIntensity2;
out vec4 outputColor;
//Added later
in vec2 colorCoord;
uniform sampler2D colorTexture;
void main()
{
    vec3 lightDir2 = normalize(modelSpacePosition - modelSpaceLightPos);
    float cosAngIncidence2 = dot(normalize(vertexNormal), lightDir2);
    cosAngIncidence2 = clamp(cosAngIncidence2, 0, 1);
    float light2DistanceSqr = dot(modelSpacePosition - modelSpaceLightPos, modelSpacePosition - modelSpaceLightPos);
    //added
    vec4 texture2 = texture(colorTexture, colorCoord);
    outputColor = ((ambientIntensity2 + (interpColor*2))/4) + ((((interpColor) * lightIntensity2/200 * cosAngIncidence2) + (ambientIntensity2 * interpColor)) / ((sqrt(light2DistanceSqr) + light2DistanceSqr)/200));
    //No outputColor for texture testing
    outputColor = texture2;
}
Those were both shaders. And here are the parts added to the .cpp:
#include <glimg/glimg.h>
#include "../framework/directories.h"
[...]
const int g_colorTexUnit = 0;
GLuint g_checkerTexture = 0;
And here's the loader for the texture:
void LoadCheckerTexture()
{
    try
    {
        std::string filename(LOCAL_FILE_DIR);
        filename += "checker.dds";
        std::auto_ptr<glimg::ImageSet> pImageSet(glimg::loaders::dds::LoadFromFile(filename.c_str()));
        glGenTextures(1, &g_checkerTexture);
        glBindTexture(GL_TEXTURE_2D, g_checkerTexture);
        glimg::SingleImage image = pImageSet->GetImage(0, 0, 0);
        glimg::Dimensions dims = image.GetDimensions();
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, dims.width, dims.height, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, image.GetImageData());
        glBindTexture(GL_TEXTURE_2D, 0);
    }
    catch(std::exception &e)
    {
        printf("%s\n", e.what());
        throw;
    }
}
Naturally I've got this in void init():
LoadCheckerTexture();
And then when rendering the object:
glActiveTexture(GL_TEXTURE0 + g_colorTexUnit);
glBindTexture(GL_TEXTURE_2D, g_checkerTexture);
g_pLeftMesh->Render();
glBindSampler(g_colorTexUnit, 0);
glBindTexture(GL_TEXTURE_2D, 0);
With all of this, I get pitch black for everything; however, when I change the last outputColor line into "outputColor = texture2 + outputColor;", everything looks normal. I have no idea what I'm doing wrong here. A friend tried to help me; we removed some unnecessary stuff, but we got nothing running.
Ok guys, I've worked on this whole thing and did manage to somehow get it running. First off I had to add samplers:
GLuint g_samplers; //Add Later
void CreateSamplers()
{
    glGenSamplers(1, &g_samplers);
    glSamplerParameteri(g_samplers, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glSamplerParameteri(g_samplers, GL_TEXTURE_WRAP_T, GL_REPEAT);
    //Linear mipmap Nearest
    glSamplerParameteri(g_samplers, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glSamplerParameteri(g_samplers, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}
I also added this to the texture loader:
glimg::OpenGLPixelTransferParams xfer = glimg::GetUploadFormatType(pImageSet->GetFormat(), 0);
glimg::SingleImage image = pImageSet->GetImage(0, 0, 0);
glimg::Dimensions dims = image.GetDimensions();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, dims.width, dims.height, 0, xfer.format, xfer.type, image.GetImageData());
The xfer variable gets the format and type adjusted to the dds. The render code also got turned into this:
//Added necessary
glActiveTexture(GL_TEXTURE0 + g_colorTexUnit);
glBindTexture(GL_TEXTURE_2D, g_checkerTexture);
glBindSampler(g_colorTexUnit, g_samplers);
g_pLeftMesh->Render();
glBindSampler(g_colorTexUnit, 0);
glBindTexture(GL_TEXTURE_2D, 0);
And of course at the end of init() I needed to add the CreateSamplers call:
//Added this later
LoadCheckerTexture();
CreateSamplers();
I'm sorry for all the trouble with all this, but I guess OpenGL really is just this confusing and it was dumb luck that I got it right. Just posting this so that people know.
Your failure to add textures may be caused by:
Have you added texture coordinates to the objects? (This is the most probable cause, because you are adding textures to a non-textured tutorial.) Add the texture coordinates to the VAO.
Did you add a uniform texture unit (sampler2D)? (It must be a uniform, else texturing will not work properly.)
Is your texture loaded, bound and enabled (GL_TEXTURE_2D)?
Is your active texture unit 0? If not, change the layout/multitexture coords or set active texture 0.
These two codes are simple texturing shaders (texture unit 0), no special things (like light, blend, bump, ...):
tm_l2g is the transformation local obj space -> world space (Modelview)
tm_g2s is the transformation world space -> screen space (Projection)
pos are vertex coordinates
txr are texture coordinates
col are colors
Do not forget to change uniform names and layout locations to yours.
Vertex:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
layout(location=2) in vec2 txr;
out smooth vec4 pixel_col;
out smooth vec2 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
    vec4 p;
    p.xyz=pos; p.w=1.0;
    p=tm_l2g*p;
    p=tm_g2s*p;
    gl_Position=p;
    pixel_col=col;
    pixel_txr=txr;
}
//------------------------------------------------------------------
Fragment:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec4 pixel_col;
in smooth vec2 pixel_txr;
uniform sampler2D txr_texture0;
out layout(location=0) vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
    vec4 col;
    col=texture(txr_texture0,pixel_txr.st);
    frag_col=col*pixel_col;
}
//------------------------------------------------------------------
[edit1] CPU old-style OpenGL render code (initializations are not included, it's only render code; they can be found here):
//------------------------------------------------------------------
// set modelview, projection, textures, bind GLSL programs...
GLfloat a=10.0,z=0.0;
glColor3f(1.0,1.0,1.0);
glBegin(GL_QUADS);
// textured quad
glTexCoord2f(0.0,0.0); glVertex3f(-a,-a,z);
glTexCoord2f(0.0,1.0); glVertex3f(-a,+a,z);
glTexCoord2f(1.0,1.0); glVertex3f(+a,+a,z);
glTexCoord2f(1.0,0.0); glVertex3f(+a,-a,z);
// reverse-order quad, to be sure that at least one passes CULL_FACE
glTexCoord2f(1.0,0.0); glVertex3f(+a,-a,z);
glTexCoord2f(1.0,1.0); glVertex3f(+a,+a,z);
glTexCoord2f(0.0,1.0); glVertex3f(-a,+a,z);
glTexCoord2f(0.0,0.0); glVertex3f(-a,-a,z);
glEnd();
//------------------------------------------------------------------
[edit2] ok, here goes the VAO/VBO render code...
//------------------------------------------------------------------------------
// enum of VBO locations (it is also your layout location); I use enums for simple in-code changes
enum _vbo_enum
{
    _vbo_pos=0,   // glVertex
    _vbo_col,     // glColor
    _vbo_tan,     // glNormal
    _vbo_unused0, // unused (at least I don't see anything at this location in your code)
    _vbo_unused1, // unused (at least I don't see anything at this location in your code)
    _vbo_txr,     // glTexCoord
    _vbos
};
//------------------------------------------------------------------------------
// 'global' names and size for the OpenGL mesh in VAO/VBO ... similar to texture names/handles
GLuint vao[1],vbo[_vbos],num_pnt=0;
//------------------------------------------------------------------------------
void VAO_init_cube() // call this before VAO use, ... but after OpenGL init!
{
    //[1] first you need some model to render (mesh); here is a simple cube
    // size, position of cube - change it so that it is visible in your scene
    const GLfloat a=1.0,x=0.0,y=0.0,z=0.0;
    // cube points 3f x,y,z
    GLfloat mesh_pos[]=
    {
        x-a,y-a,z-a,x-a,y+a,z-a,x+a,y+a,z-a,x+a,y-a,z-a,
        x-a,y-a,z+a,x-a,y+a,z+a,x+a,y+a,z+a,x+a,y-a,z+a,
        x-a,y-a,z-a,x-a,y-a,z+a,x+a,y-a,z+a,x+a,y-a,z-a,
        x-a,y+a,z-a,x-a,y+a,z+a,x+a,y+a,z+a,x+a,y+a,z-a,
        x-a,y-a,z-a,x-a,y+a,z-a,x-a,y+a,z+a,x-a,y-a,z+a,
        x+a,y-a,z-a,x+a,y+a,z-a,x+a,y+a,z+a,x+a,y-a,z+a,
    };
    // cube colors 3f r,g,b
    GLfloat mesh_col[]=
    {
        0.0,0.0,0.0,0.0,1.0,0.0,1.0,1.0,0.0,1.0,0.0,0.0,
        0.0,0.0,1.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,1.0,
        0.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,1.0,1.0,0.0,0.0,
        0.0,1.0,0.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,
        0.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,1.0,
        1.0,0.0,0.0,1.0,1.0,0.0,1.0,1.0,1.0,1.0,0.0,1.0,
    };
    // cube normals 3f x,y,z
    GLfloat mesh_tan[]=
    {
        -0.6,-0.6,-0.6,-0.6,+0.6,-0.6,+0.6,+0.6,-0.6,+0.6,-0.6,-0.6,
        -0.6,-0.6,+0.6,-0.6,+0.6,+0.6,+0.6,+0.6,+0.6,+0.6,-0.6,+0.6,
        -0.6,-0.6,-0.6,-0.6,-0.6,+0.6,+0.6,-0.6,+0.6,+0.6,-0.6,-0.6,
        -0.6,+0.6,-0.6,-0.6,+0.6,+0.6,+0.6,+0.6,+0.6,+0.6,+0.6,-0.6,
        -0.6,-0.6,-0.6,-0.6,+0.6,-0.6,-0.6,+0.6,+0.6,-0.6,-0.6,+0.6,
        +0.6,-0.6,-0.6,+0.6,+0.6,-0.6,+0.6,+0.6,+0.6,+0.6,-0.6,+0.6,
    };
    // cube texture coords 2f s,t
    GLfloat mesh_txr[]=
    {
        0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,
        0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,
        0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,
        0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,
        0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,
        0.0,0.0,0.0,1.0,1.0,1.0,1.0,0.0,
    };
    // init VAO/VBO
    glGenVertexArrays(1,vao); // allocate 1 x VAO
    glGenBuffers(_vbos,vbo);  // allocate _vbos x VBO
    // copy mesh to VAO/VBO ... after this you do not need the mesh anymore
    GLint i,sz,n; // n = number of numbers per 1 entry
    glBindVertexArray(vao[0]);
    num_pnt=sizeof(mesh_pos)/(sizeof(GLfloat)*3); // num of all points in mesh
    i=_vbo_pos; n=3; sz=sizeof(GLfloat)*n;
    glBindBuffer(GL_ARRAY_BUFFER,vbo[i]);
    glBufferData(GL_ARRAY_BUFFER,sz*num_pnt,mesh_pos,GL_STATIC_DRAW);
    glEnableVertexAttribArray(i);
    glVertexAttribPointer(i,n,GL_FLOAT,GL_FALSE,0,0);
    i=_vbo_col; n=3; sz=sizeof(GLfloat)*n;
    glBindBuffer(GL_ARRAY_BUFFER,vbo[i]);
    glBufferData(GL_ARRAY_BUFFER,sz*num_pnt,mesh_col,GL_STATIC_DRAW);
    glEnableVertexAttribArray(i);
    glVertexAttribPointer(i,n,GL_FLOAT,GL_FALSE,0,0);
    i=_vbo_tan; n=3; sz=sizeof(GLfloat)*n;
    glBindBuffer(GL_ARRAY_BUFFER,vbo[i]);
    glBufferData(GL_ARRAY_BUFFER,sz*num_pnt,mesh_tan,GL_STATIC_DRAW);
    glEnableVertexAttribArray(i);
    glVertexAttribPointer(i,n,GL_FLOAT,GL_FALSE,0,0);
    i=_vbo_txr; n=2; sz=sizeof(GLfloat)*n;
    glBindBuffer(GL_ARRAY_BUFFER,vbo[i]);
    glBufferData(GL_ARRAY_BUFFER,sz*num_pnt,mesh_txr,GL_STATIC_DRAW);
    glEnableVertexAttribArray(i);
    glVertexAttribPointer(i,n,GL_FLOAT,GL_FALSE,0,0);
    glBindVertexArray(0);
}
//------------------------------------------------------------------------------
void VAO_draw() // call this to draw your mesh; textures need to be enabled and bound before use
{
    glDisable(GL_CULL_FACE);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glBindVertexArray(vao[0]);
    glEnableVertexAttribArray(_vbo_pos);
    glEnableVertexAttribArray(_vbo_col);
    glEnableVertexAttribArray(_vbo_tan);
    glDisableVertexAttribArray(_vbo_unused0);
    glDisableVertexAttribArray(_vbo_unused1);
    glEnableVertexAttribArray(_vbo_txr);
    glDrawArrays(GL_QUADS,0,num_pnt);
    glDisableVertexAttribArray(_vbo_pos);
    glDisableVertexAttribArray(_vbo_col);
    glDisableVertexAttribArray(_vbo_tan);
    glDisableVertexAttribArray(_vbo_unused0);
    glDisableVertexAttribArray(_vbo_unused1);
    glDisableVertexAttribArray(_vbo_txr);
    glBindVertexArray(0);
}
//------------------------------------------------------------------------------
void VAO_exit() // clean up ... call this when you do not need the VAO/VBO anymore
{
    glDisableVertexAttribArray(_vbo_pos);
    glDisableVertexAttribArray(_vbo_col);
    glDisableVertexAttribArray(_vbo_tan);
    glDisableVertexAttribArray(_vbo_unused0);
    glDisableVertexAttribArray(_vbo_unused1);
    glDisableVertexAttribArray(_vbo_txr);
    glBindVertexArray(0);
    glDeleteVertexArrays(1,vao);
    glDeleteBuffers(_vbos,vbo);
}
//------------------------------------------------------------------------------
[edit3] if you are a win32/64 user you can try my IDE for GLSL. It is very simple and easy to use, but cannot change texture/attrib locations. Press [F1] for help, [F9] to run, [F10] to return to normal OpenGL mode. Also the txt-editor is a little buggy sometimes, but it is enough for my purpose.
GLSL IDE
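As a quick cross-check of the upload sizes in VAO_init_cube above: a cube drawn as GL_QUADS has 6 faces x 4 vertices = 24 points, so num_pnt should come out to 24 and the glBufferData byte counts follow from that. A small standalone sketch of the bookkeeping (plain C++, no GL needed; names are mine):

```cpp
#include <cassert>
#include <cstddef>

// Byte-size bookkeeping mirroring VAO_init_cube: 6 faces * 4 vertices,
// 3 floats per position/color/normal entry, 2 floats per texcoord entry.
constexpr std::size_t kFaces = 6, kVertsPerFace = 4;
constexpr std::size_t num_pnt = kFaces * kVertsPerFace; // = sizeof(mesh_pos)/(sizeof(GLfloat)*3)

// sz * num_pnt in the original upload loop
constexpr std::size_t bytes_for(std::size_t floats_per_entry) {
    return sizeof(float) * floats_per_entry * num_pnt;
}
```

bytes_for(3) is the size passed to glBufferData for the position, color, and normal buffers; bytes_for(2) is the size for the texture-coordinate buffer.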
Combining two texture in fragment shader
I'm working on implementing deferred shading in my game. I have rendered the diffuse textures to a render target, and I have the lighting rendered to a render target, both of which I know are fine because I can render them straight to the screen with no problems. What I want to do is combine both the diffuse map and the light map in a shader to create a final image. Here is my current fragment shader, which results in a black screen.
#version 110
uniform sampler2D diffuseMap;
uniform sampler2D lightingMap;
void main()
{
    vec4 color = texture(diffuseMap, gl_TexCoord[0].st);
    vec4 lighting = texture(lightingMap, gl_TexCoord[0].st);
    vec4 finalColor = color;
    gl_FragColor = finalColor;
}
Shouldn't this result in the same thing as just straight up drawing the diffuse map? I set the sampler2D with this method:
void ShaderProgram::setUniformTexture(const std::string& name, GLint t)
{
    GLint var = getUniformLocation(name);
    glUniform1i(var, t);
}
GLint ShaderProgram::getUniformLocation(const std::string& name)
{
    if(mUniformValues.find(name) != mUniformValues.end())
    {
        return mUniformValues[name];
    }
    GLint var = glGetUniformLocation(mProgram, name.c_str());
    mUniformValues[name] = var;
    return var;
}
EDIT: Some more information. Here is the code where I use the shader. I set the two textures and draw a blank square for the shader to use. I know for sure my render targets are working, as I said before, because I can draw them fine using the same getTextureId as I do here.
graphics->useShader(mLightingCombinedShader);
mLightingCombinedShader->setUniformTexture("diffuseMap", mDiffuse->getTextureId());
mLightingCombinedShader->setUniformTexture("lightingMap", mLightMap->getTextureId());
graphics->drawPrimitive(mScreenRect, 0, 0);
graphics->clearShader();

void GraphicsDevice::useShader(ShaderProgram* p)
{
    glUseProgram(p->getId());
}
void GraphicsDevice::clearShader()
{
    glUseProgram(0);
}
And the vertex shader:
#version 110
varying vec2 texCoord;
void main()
{
    texCoord = gl_MultiTexCoord0.xy;
    gl_Position = ftransform();
}
In GLSL version 110 you should use:
texture2D(diffuseMap, gl_TexCoord[0].st); // etc.
instead of just the texture function. And then, to combine the textures, just multiply the colours together, i.e.
gl_FragColor = color * lighting;
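The suggested combine is just a component-wise product, so a CPU reference makes it easy to sanity-check expected pixel values (a sketch for verification, not part of the question's code base):

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>; // r, g, b, a

// CPU reference for the suggested combine: component-wise product of
// the diffuse and lighting samples, matching gl_FragColor = color * lighting.
Vec4 modulate(const Vec4& color, const Vec4& lighting) {
    return { color[0] * lighting[0], color[1] * lighting[1],
             color[2] * lighting[2], color[3] * lighting[3] };
}
```

For example, a mid-grey light map (0.5 in each channel) halves every channel of the diffuse sample, which is exactly the darkening you expect from a deferred light pass.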
glUniform1i(var, t);
The glUniform functions affect the program that is currently in use, that is, the last program that glUseProgram was called on. If you want to set a uniform for a specific program, you have to use that program first.
The problem ended up being that I didn't enable the texture coordinates for the screen rectangle I was drawing.