Multiple texture units with fragment shader not working - c++

I need to display an indexed-color image that additionally has a per-pixel alpha channel. I also need to be able to change the palette at any time and have the rendered image update accordingly. At first I precomputed the pixels in software, but that was far too slow for real-time rendering, so I decided to write a shader that handles indexed textures on the GPU. The problem is that the second texture (rec_colors) doesn't load (at least it seems that way: every texel read from that sampler comes back completely empty).
Data from texture unit zero reads correctly, resulting in a black image with the right alpha :)
Shader initialization code:
Application::Display->GetRC();
glewInit();
if(!GLEW_VERSION_2_0) return false;
char* code_frag = loadCode("shader.frag");
char* code_verx = loadCode("shader.verx");
aShader_palette = glCreateShader(GL_FRAGMENT_SHADER);
//glShaderSource(aShader_palette, 1, &aShaderProgram_palette, NULL);
glShaderSource(aShader_palette, 1, (const GLchar**)&code_frag, NULL);
glCompileShader(aShader_palette);
GLint compiled = 0;
glGetShaderiv(aShader_palette, GL_COMPILE_STATUS, &compiled);
if(!compiled)
{
/* error-handling */
}
GLuint texloc = glGetUniformLocation(aShader_palette, "rec");
glUniform1i(texloc, 0);
texloc = glGetUniformLocation(aShader_palette, "rec_colors");
glUniform1i(texloc, 1);
glsl_palette_Program = glCreateProgram();
glAttachShader(glsl_palette_Program, aShader_palette);
glLinkProgram(glsl_palette_Program);
And the rendering code:
glPushAttrib(GL_CURRENT_BIT);
glColor4ub(255, 255, 255, t_a); // t_a is overall alpha of sprite displayed
glUseProgram(glsl_palette_Program); // this one is a compiled/linked shader declared above
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, this->m_SpriteData[idx].texture);
glActiveTexture(GL_TEXTURE1); // at this point, it looks like texture unit is actually changed (I checked that via glGetIntegerv)
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, this->m_PaletteTex);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, 256, 1, GL_RGBA, GL_UNSIGNED_BYTE, palette); // update possibly changed palette on each render
glActiveTexture(GL_TEXTURE0);
glBegin(GL_QUADS);
glTexCoord2i(0, 0);
glVertex2i(x, y);
glTexCoord2i(0, this->GetHeight(idx));
glVertex2i(x, y+this->GetHeight(idx));
glTexCoord2i(this->GetWidth(idx), this->GetHeight(idx));
glVertex2i(x+this->GetWidth(idx), y+this->GetHeight(idx));
glTexCoord2i(this->GetWidth(idx), 0);
glVertex2i(x+this->GetWidth(idx), y);
glEnd();
glActiveTexture(GL_TEXTURE1);
glUnbindTexture(GL_TEXTURE_RECTANGLE_ARB); // custom macro
glActiveTexture(GL_TEXTURE0);
glUnbindTexture(GL_TEXTURE_RECTANGLE_ARB);
glUseProgram(0);
glPopAttrib();
Shader code:
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect rec;
uniform sampler2DRect rec_colors;
void main(void)
{
vec4 oldcol = texture2DRect(rec, gl_TexCoord[0].st);
vec4 newcol = texture2DRect(rec_colors, vec2(oldcol.r*255.0, 0.0)); // palette index is multiplied by 255 because rectangle texture coordinates aren't normalized
gl_FragColor.rgb = newcol.rgb;
gl_FragColor.a = oldcol.g; // alpha from green part
}
I've googled a lot; the similar posts I found were all solved by fixing the texture unit IDs in the glUniform1i calls, but mine look absolutely normal (at least, TEXTURE0 loads correctly into rec).

Do you check for errors anywhere with glGetError? I believe you're doing something incorrectly: glGetUniformLocation is supposed to be executed against a linked program, not a shader. You're calling glGetUniformLocation before your program is even created and linked.
See relevant text from man page: http://www.opengl.org/wiki/GLAPI/glGetUniformLocation
The actual locations assigned to uniform variables are not known until the program object is linked successfully. After linking has occurred, the command glGetUniformLocation can be used to obtain the location of a uniform variable. Uniform variable locations and values can only be queried after a link if the link was successful.
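In your case that means creating and linking the program first, and binding it with glUseProgram before setting the sampler uniforms, since glUniform1i operates on the currently bound program. A minimal sketch of the corrected order, reusing the names from your code:
glsl_palette_Program = glCreateProgram();
glAttachShader(glsl_palette_Program, aShader_palette);
glLinkProgram(glsl_palette_Program);
// Query locations only after a successful link, from the program (not the shader)
glUseProgram(glsl_palette_Program);
GLint texloc = glGetUniformLocation(glsl_palette_Program, "rec");
glUniform1i(texloc, 0); // sampler "rec" reads from texture unit 0
texloc = glGetUniformLocation(glsl_palette_Program, "rec_colors");
glUniform1i(texloc, 1); // sampler "rec_colors" reads from texture unit 1
glUseProgram(0);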
You should always, at the very least, check for OpenGL errors with glGetError once per frame during development. It will alert you to these problems before you have to go online to ask for help.

Related

OpenGL access DepthComponent Texture in GLSL 400

I'm trying to access a DepthComponent texture in my GLSL shader of version 400.
The program does two-pass rendering. In the first pass I render all the geometry and colors to a framebuffer that has a color attachment and a depth attachment. The depth attachment is bound like this:
(Note: I'm using C# with OpenTK, which is strongly typed, in my code examples.)
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.DepthAttachment, TextureTarget.Texture2D, depthTexture.ID, 0);
The depth texture has an internal pixel format of DepthComponent32f, a pixel format of DepthComponent, and Float as the pixel type. All other properties have default values.
The second pass renders the framebuffer's color image onto the screen using the following shader:
#version 400
uniform sampler2D finalImage;
in vec2 texCoords;
out vec4 fragColor;
void main(){
fragColor = vec4(texture2D(finalImage, texCoords.xy).rgb, 1.0);
}
But now I want to read the depth texture (DepthComponent) instead of the color texture (RGBA).
I tried a lot of things, like disabling TextureCompareMode, using shadow2DSampler with shadow2DProj(sampler, vec4(texCoords.xy, 0.0, 1.0)), or just textureProj(sampler, vec3(texCoords.xy, 0.0)). But it returns only 1 or 0, depending on which configuration I use.
To be sure that my depth texture is OK, I've read the pixels back into a float array like this:
GL.ReadPixels(0, 0, depthTexture.Width, depthTexture.Height, PixelFormat.DepthComponent, PixelType.Float, float_array);
Everything seems to be correct; it's showing me 1.0 for empty space and values between 0.99 and 1.0 for visible objects.
Edit
Here is a code example how my process looks like:
Init code
depthTexture= new GLEXTexture2D(width, height);
depthTexture.TextureCompareMode = TextureCompareMode.None;
depthTexture.CreateMutable(PixelInternalFormat.DepthComponent32f, PixelFormat.DepthComponent, PixelType.Float);
CreateMutable function:
ReserveTextureID();
GLEX.glBeginTexture2D(ID);
GL.TexImage2D(TextureTarget.Texture2D, 0, pInternalFormat, width, height, 0, pFormat, pType, IntPtr.Zero);
ApplyOptions();
MarkReserved(true);
GLEX.glEndTexture2D();
(Framebuffer attachment mentioned above)
Render pass 1
GL.BindFramebuffer(FramebufferTarget.Framebuffer, drawBuffer.ID);
GL.Viewport(0, 0, depthTexture.Width, depthTexture.Height);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit | ClearBufferMask.StencilBufferBit);
GL.Enable(EnableCap.DepthTest);
GL.ClearColor(Color.Gray);
GL.UseProgram(geometryPassShader.ID);
geometry_shaderUniformMVPM.SetValueMat4(false, geometryImageMVMatrix * geometryImageProjMatrix);
testRectangle.Render(PrimitiveType.QuadStrip);
GL.UseProgram(0);
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
Render pass 2
GL.Viewport(0, 0, depthTexture.Width, depthTexture.Height);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit | ClearBufferMask.StencilBufferBit);
GL.ClearColor(Color.White);
GL.UseProgram(finalImageShader.ID);
GL.ActiveTexture(TextureUnit.Texture0);
depthTexture.Bind();
final_shaderUniformMVPM.SetValueMat4(false, finalImageMatrix);
screenQuad.Render(PrimitiveType.Quads);
GL.UseProgram(0);
GL.BindTexture(TextureTarget.Texture2D, 0);
A few hours later I found the solution.
The problem was the MinFilter. As the Khronos documentation for glTexParameter says:
The initial value of GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR.
With a mipmap-based minification filter and no mipmap levels allocated, the texture is incomplete, so sampling it in the shader doesn't return the stored depth values. I changed the MinFilter of my depth texture to GL_NEAREST (GL_LINEAR is also legal) and now the depth values in the GLSL shader are right (after linearization, of course).
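In raw OpenGL terms the fix is one call on the bound depth texture (shown here with the plain C API for clarity; the OpenTK code sets the equivalent TextureMinFilter parameter, and depthTextureID is an illustrative name):
glBindTexture(GL_TEXTURE_2D, depthTextureID);
// GL_NEAREST has no mipmap requirement, unlike the default GL_NEAREST_MIPMAP_LINEAR
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);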
Additional info:
There are some extensions for MagFilter, like LINEAR_DETAIL_ALPHA_SGIS. I've tried some of these; the correctness of the depth values was not affected.

glDrawElements on OSX has high cpu usage

I've been primarily developing on Linux (Mint) and Windows using SDL2 and OpenGL 3.3, with few issues in regard to drawing objects; CPU usage never really spiked past ~40%.
That was, until I tried porting what I had to OSX (Sierra).
Using the exact same shaders and code that run just fine on Linux and Windows spikes the CPU usage on OSX to a consistent ~99%.
At first, I thought it was a batching issue, so I batched my draw calls together to minimize the number of calls to glDrawElements, and that didn't work.
Then, I thought it was an issue involving not using attributes in the vertex/fragment shader (like: OpenGL core profile incredible slowdown on OS X)
Also, I maintain the framerate at 60 fps.
After sorting that out, no luck. I tried logging everything I could: nothing from glGetError() nor from the shader logs.
So I removed bits and pieces from my vertex/fragment shaders to see what in particular was slowing down my draw calls. I managed to reduce it to this: any call in either my vertex or fragment shader to the texture() function drives the CPU to high usage.
Texture loading code:
// Texture loading
void PCShaderSurface::AddTexturePairing(HashString const &aName)
{
GLint minFilter = GL_LINEAR;
GLint magFilter = GL_LINEAR;
GLint wrapS = GL_REPEAT;
GLint wrapT = GL_REPEAT;
if(Constants::GetString("OpenGLMinFilter") == "GL_NEAREST")
{
minFilter = GL_NEAREST;
}
if(Constants::GetString("OpenGLMagFilter") == "GL_NEAREST")
{
magFilter = GL_NEAREST;
}
if(Constants::GetString("OpenGLWrapModeS") == "GL_CLAMP_TO_EDGE")
{
wrapS = GL_CLAMP_TO_EDGE;
}
if(Constants::GetString("OpenGLWrapModeT") == "GL_CLAMP_TO_EDGE")
{
wrapT = GL_CLAMP_TO_EDGE;
}
glGenTextures(1, &mTextureID);
glBindTexture(GL_TEXTURE_2D, mTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapS);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mSurface->w, mSurface->h, 0, mTextureFormat, GL_UNSIGNED_BYTE, mSurface->pixels);
GetManager()->AddTexturePairing(aName, TextureData(mTextureID, mSurface->w, mSurface->h));
}
Draw Code:
// I batch objects that use the same program and texture id to draw in the same call.
glUseProgram(program);
int activeTexture = texture % mMaxTextures;
int vertexPosLocation = glGetAttribLocation(program, "vertexPos");
int texCoordPosLocation = glGetAttribLocation(program, "texCoord");
int objectPosLocation = glGetAttribLocation(program, "objectPos");
int colorPosLocation = glGetAttribLocation(program, "primaryColor");
// Calculate matrices and push vertex, color, position, texCoord data
// ...
// Enable textures and set uniforms.
glBindVertexArray(mVertexArrayObjectID);
glActiveTexture(GL_TEXTURE0 + activeTexture);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program, "textureUnit"), activeTexture);
glUniform3f(glGetUniformLocation(program, "cameraDiff"), cameraTranslation.x, cameraTranslation.y, cameraTranslation.z);
glUniform3f(glGetUniformLocation(program, "cameraSize"), cameraSize.x, cameraSize.y, cameraSize.z);
glUniformMatrix3fv(glGetUniformLocation(program, "cameraTransform"), 1, GL_TRUE, cameraMatrix);
// Set shader properties. Due to batching, done on a per surface / shader basis.
// Shader uniforms are reset upon relinking.
SetShaderProperties(surface, true);
// Set VBO and buffer data.
glBindVertexArray(mVertexArrayObjectID);
BindAttributeV3(GL_ARRAY_BUFFER, mVertexBufferID, vertexPosLocation, vertexData);
BindAttributeV3(GL_ARRAY_BUFFER, mTextureBufferID, texCoordPosLocation, textureData);
BindAttributeV3(GL_ARRAY_BUFFER, mPositionBufferID, objectPosLocation, positionData);
BindAttributeV4(GL_ARRAY_BUFFER, mColorBufferID, colorPosLocation, colorData);
// Set index data
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * indices.size(), &indices[0], GL_DYNAMIC_DRAW);
// Draw and disable
glDrawElements(GL_TRIANGLES, static_cast<unsigned>(vertexData.size()), GL_UNSIGNED_INT, 0);
DisableVertexAttribArray(vertexPosLocation);
DisableVertexAttribArray(texCoordPosLocation);
DisableVertexAttribArray(objectPosLocation);
DisableVertexAttribArray(colorPosLocation);
// Reset shader property values.
SetShaderProperties(surface, false);
// Reset to default texture
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glUseProgram(0);
Example binding code:
void PCShaderScreen::BindAttributeV3(GLenum aTarget, int const aBufferID, int const aAttributeLocation, std::vector<Vector3> &aData)
{
if(aAttributeLocation != -1)
{
glEnableVertexAttribArray(aAttributeLocation);
glBindBuffer(aTarget, aBufferID);
glBufferData(aTarget, sizeof(Vector3) * aData.size(), &aData[0], GL_DYNAMIC_DRAW);
glVertexAttribPointer(aAttributeLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Vector3), 0);
glBindBuffer(aTarget, 0);
}
}
VS code:
#version 330
in vec4 vertexPos;
in vec4 texCoord;
in vec4 objectPos;
in vec4 primaryColor;
uniform vec3 cameraDiff;
uniform vec3 cameraSize;
uniform mat3 cameraTransform;
out vec2 texValues;
out vec4 texColor;
void main()
{
texColor = primaryColor;
texValues = texCoord.xy;
vec3 vertex = vertexPos.xyz + objectPos.xyz;
vertex = (cameraTransform * vertex) - cameraDiff;
vertex.x /= cameraSize.x;
vertex.y /= -cameraSize.y;
vertex.y += 1.0;
vertex.x -= 1.0;
gl_Position.xyz = vertex;
gl_Position.w = 1.0;
}
FS code:
#version 330
uniform sampler2D textureUnit;
in vec2 texValues;
in vec4 texColor;
out vec4 fragColor;
void main()
{
// Slow, 99% CPU usage on OSX only
fragColor = texture(textureUnit, texValues) * texColor;
// Fine on everything
fragColor = vec4(1,1,1,1);
}
I'm really out of ideas here, I even followed Apple's best practices (https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_texturedata/opengl_texturedata.html) as best as I could, with no luck.
Are the Windows and Linux drivers I'm using just offering me some form of forgiveness that I'm not aware of? Is the OSX driver really that sensitive? I must be missing something. Any help and insight would be appreciated. Thanks for reading my long winded speech.
All credit to @keltar for finding this, but my problem was in the glActiveTexture call.
I changed the call from glActiveTexture(GL_TEXTURE0 + activeTexture) to just glActiveTexture(GL_TEXTURE0).
To paraphrase @keltar: "Constantly changing the texture slot number might force the driver to recompile the shader each time. I don't think it matters which exact value it is, as long as it doesn't change (and is within GL limits). I suppose the hardware you use can't effectively (or at all) sample a texture from an arbitrary slot specified by a uniform variable - but GL implies it can. On some hardware, e.g., fetching vertex attributes is internally part of the shader too. When state changes, the driver attempts to patch the shader, but if the change is too big (or the driver doesn't know how to patch it) it falls back to recompilation. Sadly, OSX graphics drivers aren't known to be good, to my knowledge."
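The change itself is tiny; here is a sketch against the draw code above:
// Before: the unit varies per texture, which can force the driver to patch/recompile the shader
// glActiveTexture(GL_TEXTURE0 + activeTexture);
// glUniform1i(glGetUniformLocation(program, "textureUnit"), activeTexture);
// After: pin everything to unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program, "textureUnit"), 0);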
You do a lot of GL calls in your draw code: binding buffers, uploading data to buffers, etc. Most of them would be better done when preparing or uploading the data.
I prefer to keep the draw code down to just:
glUseProgram(program);
Bind the VAO with glBindVertexArray
Pass the uniforms
Activate texture units with glActiveTexture
Issue the glDraw* commands
glUseProgram(0);
Unbind the VAO
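A minimal sketch of that split, assuming the vertex and index buffers were filled once at load time and the uniform locations were cached after linking (names are illustrative):
// Per frame: only state binds, uniform updates, and the draw call; no glBufferData uploads
glUseProgram(program);
glBindVertexArray(vao); // attribute pointers were recorded into the VAO at load time
glUniformMatrix3fv(cameraTransformLoc, 1, GL_TRUE, cameraMatrix);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
glUseProgram(0);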

GLSL, combining 2D and 3D textures

I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of height. The 2D texture currently only has an orange line across it, meant to be a "road". This is my fragment shader:
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// Yes, I am aware I am only returning the 2D texture value
// However this is for testing purposes only
// Doing gl_FragColor = diffuse3D + diffuse2D;
// Or any other operation returns the 3D texture only
gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);
s.enable(); // simple glUseProgram call within my Shader object
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
glPushMatrix();
glScalef(scalex,scaley,scalez);
glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
glPopMatrix();
s.disable(); // glUseProgram(0)
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_3D);
glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
GLuint loc = glGetUniformLocation(program, name.c_str());
if (loc>0)
{
glUniform1i(loc, value);
}
}
The result is a solid black color over the whole terrain. I have sadly been unable to find much information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, which is what you would expect using only the first two coordinates). I also checked the values passed to my setSampler() method, and I do get the 0 and 1 values, and the 1 and 2 locations corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here, which I have already implemented.
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D texture became unit 1. Now only the 2D texture is rendered. But my texture units appear to be passed correctly to the shader. Any clues?
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
int test1 = Compute(val1);
int test2 = Compute(val2);
return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
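For instance, if the road texture's alpha channel marked where the road should appear (an assumption; the question doesn't say what the texture contains), the blend could be a single mix:
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// Show the road where its alpha is high, the terrain elsewhere
gl_FragColor = mix(diffuse3D, diffuse2D, diffuse2D.a);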
The reason you get a black color is simply that you don't set the sampler uniform variables properly.
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
}
What this shader does is look up the value of 'roadTexture' and display it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound. So you're trying to access a 3D texture with 2D texture coordinates, which may well return all black. You need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units (0 and 1, respectively) with glUniform1i.
EDIT: Also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility (the default is core).
You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY); again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1);.
From http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml:
Enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit.
Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This is the code that worked, for anyone who gets the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
// Drawing Calls
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
Thanks for reading.

OpenGL, Shader Model 3.3 Texturing: Black Textures?

I've been banging my head against this for hours now. I'm sure it's something simple, but I just can't get a result. I've had to edit this code down a bit because I've built a little library to encapsulate the OpenGL calls, but the following is an accurate description of the state of affairs.
I'm using the following vertex shader:
#version 330
in vec4 position;
in vec2 uv;
out vec2 varying_uv;
void main(void)
{
gl_Position = position;
varying_uv = uv;
}
And the following fragment shader:
#version 330
in vec2 varying_uv;
uniform sampler2D base_texture;
out vec4 fragment_colour;
void main(void)
{
fragment_colour = texture2D(base_texture, varying_uv);
}
Both shaders compile and the program links without issue.
In my init section, I load a single texture like so:
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
// Load an image.
QImage image("G:/test_image.png");
image = image.convertToFormat(QImage::Format_RGB888);
if(!image.isNull())
{
// Load up a single texture.
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
glBindTexture(GL_TEXTURE_2D, 0);
}
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
You'll observe that I'm using Qt to load the texture. The calls to ::throw_on_error() check for errors in OpenGL (by calling Error()), and throw an exception if one occurs. No OpenGL errors occur in this code, and the image loaded using Qt is valid.
Drawing is performed as follows:
// Clear previous.
glClear(GL_COLOR_BUFFER_BIT |
GL_DEPTH_BUFFER_BIT |
GL_STENCIL_BUFFER_BIT);
// Use our program.
glUseProgram(GLProgram);
// Bind the vertex array.
glBindVertexArray(GLVertexArray);
/* ------------------ Setting active texture here ------------------- */
// Tell the shader which textures are which.
kt::kits::open_gl::gl_int tAddr = glGetUniformLocation(GLProgram, "base_texture");
glUniform1i(tAddr, 0);
// Activate the texture Texture(0) as texture 0.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, Texture);
/* ------------------------------------------------------------------ */
// Draw vertex array as triangles.
glDrawArrays(GL_TRIANGLES, 0, 4);
glBindVertexArray(0);
glUseProgram(0);
// Detect errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
Similarly, no OpenGL errors occur, and a triangle is drawn to the screen. However, it looks like this:
It occurred to me that the problem may be related to my texture coordinates. So I rendered the following image, using s as the 'red' component and t as the 'green' component:
The texture coordinates appear correct, yet I'm still getting the black triangle of doom. What am I doing wrong?
I think it could depend on an incomplete initialization of your texture object.
Try initializing the texture MIN and MAG filters:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Moreover, I would suggest checking the size of the texture. If it is not a power of 2, then you have to set the wrap mode to CLAMP_TO_EDGE:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
Black textures are often due to this issue; it's a very common problem.
Ciao
In your fragment shader you're writing to a self-defined target:
fragment_colour = texture2D(base_texture, varying_uv);
If that's not meant to be gl_FragColor or gl_FragData[…], did you properly set the designated fragment data location?
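With a user-defined output in GLSL 330 you can bind its location explicitly before linking (a sketch using the names from the question; with a single output, location 0 is also the default):
// Map the shader's out variable to draw buffer 0,
// then (re)link the program so the binding takes effect.
glBindFragDataLocation(GLProgram, 0, "fragment_colour");
glLinkProgram(GLProgram);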

Texture rendering and VBO's [OpenGL/SDL/C++]

So, I've been working on a little game project for a bit and I've hit a snag that's annoying me to no end. I load an OBJ file, which then gets rendered after being put into a VBO. This part works fine, no problemo. However, I've been trying to get it to render the accompanying texture with the supplied UVs, with no success. Currently I just get a matte green coloration on my model. Upon investigating in GDE, I've seen that the texture gets loaded fine and occupies the GL_TEXTURE0 unit, so that's not the issue. I believe it may be my binding, but I have no idea why it would fail...
void Model_Man::render_models()
{
for(int x=0; x<models.size(); x++)
{
if(models.at(x).visible==true)
{
glBindBuffer(GL_ARRAY_BUFFER,models.at(x).t_buff);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,models.at(x).i_buff);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT,0,0);
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(2,GL_FLOAT,0,&models.at(x).uvs[0]);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
int tex_loc = glGetUniformLocation(models.at(x).shaderid,"color_texture");
glUniform1i(tex_loc,GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
c_render.use_program(models.at(x).shaderid);
glDrawElements(GL_TRIANGLES,models.at(x).f_index.size()*3,GL_UNSIGNED_INT,0);
c_render.use_program();
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
}
}
And my shader files...
Shader.frag
uniform sampler2D color_texture;
void main() {
// Set the output color of our current pixel
gl_FragColor = texture2D(color_texture, gl_TexCoord[0].st);
}
Shader.vert
void main() {
gl_TexCoord[0] = gl_MultiTexCoord0;
// Set the position of the current vertex
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
And yes, I know I'm currently being horribly inefficient with my render loop :P but I'm already planning on refactoring it; I am just attempting to get this single model to draw correctly with everything I'm aiming to do. I have no clue why it wouldn't render with the texture correctly applied - unless it's because I need to interleave my arrays, but I'm still supplying the UV data, so I don't see why it fails.
The call that sets the sampler uniform should not pass GL_TEXTURE0, but the texture unit index, which here is actually 0.
Indeed:
glUniform1i(location, 0)
For setting up a sampler uniform, do:
glUseProgram(progId);
// ...
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, texId);
glUniform1i(location, texUnit);
The main concept is that uniform variables are part of the shader program's state (they are maintained until you re-link the program or reset the uniform value). Without a bound program, glUniform1i will fail, since there is no shader program on which to set the uniform value!
As general advice, call glGetError after each OpenGL call to detect these conditions. Most of those calls can be compiled out by the preprocessor in release builds.
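A minimal sketch of that idea, as a hypothetical GL_CHECK macro that reports the error at the call site and compiles away in release builds:
#include <cstdio>
#ifndef NDEBUG
#define GL_CHECK(call) do { call; GLenum e = glGetError(); if (e != GL_NO_ERROR) std::fprintf(stderr, "GL error 0x%04X at %s:%d\n", e, __FILE__, __LINE__); } while (0)
#else
#define GL_CHECK(call) call
#endif
// Usage: GL_CHECK(glUniform1i(location, texUnit));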
Well, it turned out that the big issue was that while I was binding a texture, I wasn't actually setting it in a way that it was understood as being used. Setting glClientActiveTexture(GL_TEXTURE0 + texUnit) in combination with glActiveTexture() ended up being the final solution.
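A sketch of that pairing inside the render loop above (texUnit is 0 here):
glActiveTexture(GL_TEXTURE0 + texUnit); // server-side: selects the unit the sampler reads from
glClientActiveTexture(GL_TEXTURE0 + texUnit); // client-side: selects the unit glTexCoordPointer feeds
glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
glTexCoordPointer(2, GL_FLOAT, 0, &models.at(x).uvs[0]);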