PNG loaded with SDL_image segfaults in glTexImage2D - OpenGL

I don't think this is a problem with the image I'm loading. The image resolution is 256x256, so it's not the power-of-two issue. I've looked at other segfaults people have gotten with glTexImage2D, but I still can't make this one go away. Sorry that the code isn't in C/C++ (I don't think that's the problem either), but it should still be easy to understand.
let surface = sdlimage.load("image.png") # equivalent to IMG_Load call in C
if surface.isNil:
  echo "Image couldn't be loaded: ", sdl2.getError()
  quit 1
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glEnable(GL_BLEND)
var tex: cuint
glGenTextures(1, addr tex)
glBindTexture(GL_TEXTURE_2D, tex)
glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
glPixelStorei(GL_UNPACK_ROW_LENGTH, surface.pitch)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA.GLint, surface.w.GLsizei, surface.h.GLsizei, 0, GL_RGBA, GL_UNSIGNED_BYTE, surface.pixels) # segfault
UPDATE:
I managed to get it to not segfault thanks to user1118321's answer (I checked the pixel format and saw that SDL used an "indexed" format, which another user said was the problem when they fixed this issue for themselves; creating a new surface seems to be the right solution). But now it shows nothing on the screen: it's black when I set glClearColor to black, and white when I set glClearColor to white.
Updated image loading code:
let surface = sdlimage.load("image.png")
if surface.isNil:
  echo "Image couldn't be loaded: ", sdl2.getError()
  quit 1
var w = surface.w # may need to make this the next power of two
var h = surface.h # and this
var bpp: cint
var Rmask, Gmask, Bmask, Amask: uint32
if not pixelFormatEnumToMasks(SDL_PIXELFORMAT_ABGR8888, bpp,
                              Rmask, Gmask, Bmask, Amask):
  quit "pixel format enum to masks " & $sdl2.getError()
let newSurface = createRGBSurface(0, w, h, bpp,
                                  Rmask, Gmask, Bmask, Amask)
discard surface.setSurfaceAlphaMod(0xFF)
discard surface.setSurfaceBlendMode(BlendMode_None)
blitSurface(surface, nil, newSurface, nil)
var tex: cuint
glGenTextures(1, addr tex)
glBindTexture(GL_TEXTURE_2D, tex)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, 4, w.GLsizei, h.GLsizei, 0, GL_RGBA, GL_UNSIGNED_BYTE, newSurface.pixels)
Image rendering code:
glUseProgram(shaderProgram)
glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT)
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()
glTranslatef(0.0, 0.0, -7.0)
glColor3f(1.0, 1.0, 1.0)
glBegin(GL_QUADS)
glTexCoord2i(0, 0)
glVertex3f(-0.5, -1.9, 0.0)
glTexCoord2i(1, 0)
glVertex3f(0.5, -1.9, 0.0)
glTexCoord2i(0, 1)
glVertex3f(-0.5, -1.4, 0.0)
glTexCoord2i(1, 1)
glVertex3f(0.5, -1.4, 0.0)
glEnd()
If shaders have something to do with it, here's my vertex shader:
#version 330 core
in vec4 data;
out vec3 ourColor;
void main() {
gl_Position = vec4(data.x, data.y, data.z, 1.0);
ourColor = vec3(data.w, data.w, data.w);
}
And the fragment shader:
#version 330 core
precision highp float;
in vec3 ourColor;
out vec4 color;
void main() {
color = vec4(ourColor, 1.0);
}

You should check glGetError() to see if there are any OpenGL errors occurring. You should also check the surface.pixelFormat to see whether you actually have 4 bytes per pixel in your image. If it's just RGB data and not RGBA, then glTexImage2D() will read past the end of the image data and likely crash with a segmentation fault.
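In C with SDL2, those two checks might look like this (a sketch; the question's Nim calls map one-to-one onto these, and SDL_ConvertSurfaceFormat does in one call what the asker's update later does with createRGBSurface/blitSurface):
SDL_Surface *surface = IMG_Load("image.png");
if (surface == NULL) { /* handle the load error */ }
/* 1. Inspect what was actually loaded */
printf("format: %s, bytes per pixel: %d\n",
       SDL_GetPixelFormatName(surface->format->format),
       surface->format->BytesPerPixel);
/* 2. Convert anything that isn't 4-byte RGBA before uploading */
if (surface->format->format != SDL_PIXELFORMAT_ABGR8888) {
    SDL_Surface *converted = SDL_ConvertSurfaceFormat(surface, SDL_PIXELFORMAT_ABGR8888, 0);
    SDL_FreeSurface(surface);
    surface = converted;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
/* 3. Check for GL errors after the upload */
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "GL error after upload: 0x%x\n", err);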

Related

Problem at Shadows Calculation with ShadowMap Rendering

I'm having a little trouble implementing shadow mapping in the engine I'm working on. I'm following LearnOpenGL's tutorial to do so, and it more or less "works", but there's something I'm doing wrong, as if something in the shadow map were inverted. Check the following gifs: gif1, gif2
In those gifs there is a simple scene with a directional light (which uses an orthographic frustum for the shadow calculations, to ease my life) that has to cast shadows. At the right there is a little window showing the "shadow map scene": the scene rendered from the light's point of view with only depth values.
Now, about the code: it pretty much follows the guidelines from the mentioned tutorial. I have a ModuleRenderer, and I first create the framebuffers with the textures they have to have:
glGenTextures(1, &depthMapTexture);
glBindTexture(GL_TEXTURE_2D, depthMapTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, App->window->GetWindowWidth(), App->window->GetWindowHeight(), 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glGenFramebuffers(1, &depthbufferFBO);
glBindFramebuffer(GL_FRAMEBUFFER, depthbufferFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMapTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
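One check worth adding to this setup (it is not in the original code, but uses only standard GL) is verifying completeness while the FBO is still bound, before the final glBindFramebuffer(GL_FRAMEBUFFER, 0):
glBindFramebuffer(GL_FRAMEBUFFER, depthbufferFBO);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    printf("depth FBO is incomplete!\n"); /* a depth-only FBO also needs the glDrawBuffer(GL_NONE) set above */
glBindFramebuffer(GL_FRAMEBUFFER, 0);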
Then, in the Module Renderer's post update, I make the 2 render passes and draw the FBOs:
// --- Shadows Buffer (Render 1st Pass) ---
glBindFramebuffer(GL_FRAMEBUFFER, depthbufferFBO);
SendShaderUniforms(shadowsShader->ID, true);
DrawRenderMeshes(true);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// --- Standard Buffer (Render 2nd Pass) ---
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
SendShaderUniforms(defaultShader->ID, false);
DrawRenderMeshes(false);
// --- Draw Lights ---
std::vector<ComponentLight*>::iterator LightIterator = m_LightsVec.begin();
for (; LightIterator != m_LightsVec.end(); ++LightIterator)
(*LightIterator)->Draw();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// -- Draw framebuffer textures ---
DrawFramebuffer(depth_quadVAO, depthMapTexture, true);
DrawFramebuffer(quadVAO, rendertexture, false);
The DrawRenderMeshes() function basically gets the list of meshes to draw and the shader it has to pick, and sends all the needed uniforms. It's too big a function to paste here, but for a normal mesh it picks a shader called Standard and sends everything that shader needs. For the shadow map, it sends the texture attached to the depth FBO:
glUniform1i(glGetUniformLocation(shader, "u_ShadowMap"), 4);
glActiveTexture(GL_TEXTURE0 + 4);
glBindTexture(GL_TEXTURE_2D, depthMapTexture);
In the standard shader's vertex stage, I just pass in the light-space uniform (the light frustum's projection x view matrix) to calculate the fragment position in light space (the following is done in the vertex shader's main):
v_FragPos = vec3(u_Model * vec4(a_Position, 1.0));
v_FragPos_InLightSpace = u_LightSpace * vec4(v_FragPos, 1.0);
v_FragPos_InLightSpace.z = (1.0 - v_FragPos_InLightSpace.z);
gl_Position = u_Proj * u_View * vec4(v_FragPos, 1.0);
And in the fragment shader, I use that value to calculate the fragment's shadowing (the diffuse + specular light values are multiplied by the result of this shadowing function):
float ShadowCalculation()
{
vec3 projCoords = v_FragPos_InLightSpace.xyz / v_FragPos_InLightSpace.w;
projCoords = projCoords * 0.5 + 0.5;
float closeDepth = texture(u_ShadowMap, projCoords.xy).z;
float currDept = projCoords.z;
float shadow = currDept > closeDepth ? 1.0 : 0.0;
return (1.0 - shadow);
}
Again, I'm not sure what can be wrong, but my guess is that something is inverted somewhere. If anyone can think of something, I would appreciate it a lot. Thank you :)
Note: for the first render pass, in which the whole scene is rendered with only depth values, I use a very simple shader that just puts objects in their position with the usual transform (in the vertex shader):
gl_Position = u_Proj * u_View * u_Model * vec4(a_Position, 1.0);
And the fragment shader doesn't do anything; it has an empty main(), since that is equivalent to what we want for the shadow pass:
gl_FragDepth = gl_FragCoord.z;

GL_INVALID_OPERATION when attempting to sample cubemap texture

I'm working on shadow casting using this lovely tutorial. The process is, we render the scene to a frame buffer, attached to which is a cubemap to hold the depth values. Then, we pass this cubemap to a fragment shader which samples it and gets the depth values from there.
I took a slight deviation from the tutorial in that instead of using a geometry shader to render the entire cubemap at once, I instead render the scene six times to get the same effect - largely because my current shader system doesn't support geometry shaders and for now I'm not too concerned about the performance hit.
The depth cubemap is being drawn to just fine; here's a screenshot from gDEBugger:
Everything seems to be in order here.
However, I'm having issues in my fragment shader when I attempt to sample this cubemap. After the call to glDrawArrays, a call to glGetError returns GL_INVALID_OPERATION, and as best I can tell it's coming from here (the offending line has been commented):
struct PointLight
{
vec3 Position;
float ConstantRolloff;
float LinearRolloff;
float QuadraticRolloff;
vec4 Color;
samplerCube DepthMap;
float FarPlane;
};
uniform PointLight PointLights[NUM_POINT_LIGHTS];
[...]
float CalculateShadow(int lindex)
{
// Calculate vector between fragment and light
vec3 fragToLight = FragPos - PointLights[lindex].Position;
// Sample from the depth map (Comment this out and everything works fine!)
float closestDepth = texture(PointLights[lindex].DepthMap, vec3(1.0, 1.0, 1.0)).r;
// Transform to original value
closestDepth *= PointLights[lindex].FarPlane;
// Get current depth
float currDepth = length(fragToLight);
// Test for shadow
float bias = 0.05;
float shadow = currDepth - bias > closestDepth ? 1.0 : 0.0;
return shadow;
}
Commenting out the aforementioned line seems to make everything work fine - so I'm assuming it's the call to the texture sampler that's causing issues. I saw that this can be attributed to using two textures of different types in the same texture unit - but according to gDEBugger this isn't the case:
Texture 16 is the depth cube map.
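For readers without gDEBugger, the same bindings can be inspected in code. A small debugging sketch using standard GL queries (the unit count of 8 is arbitrary):
GLint prevUnit = GL_TEXTURE0;
glGetIntegerv(GL_ACTIVE_TEXTURE, &prevUnit); /* save the active unit */
for (int u = 0; u < 8; ++u) {
    GLint tex2D = 0, texCube = 0;
    glActiveTexture(GL_TEXTURE0 + u);
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex2D);
    glGetIntegerv(GL_TEXTURE_BINDING_CUBE_MAP, &texCube);
    printf("unit %d: 2D=%d cube=%d\n", u, tex2D, texCube);
}
glActiveTexture((GLenum)prevUnit); /* restore */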
In case it's relevant, here's how I'm setting up the FBO: (called only once)
// Generate frame buffer
glGenFramebuffers(1, &depthMapFrameBuffer);
// Generate depth maps
glGenTextures(1, &depthMap);
// Set up textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthMap);
for (int i = 0; i < 6; ++i)
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
ShadowmapSize, ShadowmapSize, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// Set texture parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
// Attach cubemap to FBO
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFrameBuffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthMap, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
ERROR_LOG("PointLight created an incomplete frame buffer!\n");
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Here's how I'm drawing with it: (called every frame)
// Set up viewport
glViewport(0, 0, ShadowmapSize, ShadowmapSize);
// Bind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFrameBuffer);
// Clear depth buffer
glClear(GL_DEPTH_BUFFER_BIT);
// Render scene
for(int i = 0; i < 6; ++i)
{
sh->SetUniform("ShadowMatrix", lightSpaceTransforms[i]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, depthMap, 0);
Space()->Get<Renderer>()->RenderScene(sh);
}
// Unbind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
And here's how I'm binding it before drawing:
std::stringstream ssD;
ssD << "PointLights[" << i << "].DepthMap";
glActiveTexture(GL_TEXTURE4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap()); // just returns the ID of the light's depth map
shader->SetUniform(ssD.str().c_str(), i + 4); // just a wrapper around glUniform1i
Thank you for reading, and please let me know if I can supply more information!
This is an old post, but I think it may be useful for other people who find it through search.
Your problem is here:
glActiveTexture(GL_TEXTURE4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
This replacement should fix the problem:
glActiveTexture(GL_TEXTURE4 + i);
glUniform1i(glGetUniformLocation(programId, "cubeMapUniformName"), 4 + i); // pass the unit index (4 + i), not the GL_TEXTURE4 + i enum
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
It sets the texture unit number for the shader's sampler. Note that glGetUniformLocation takes the program object itself (not a string), glUniform1i takes the raw unit index matching the glActiveTexture call, and the program must be in use (glUseProgram) when the uniform is set.
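Applied to the asker's binding loop, that pattern might look like this (a sketch; programId and numPointLights are stand-ins for whatever the engine's shader wrapper exposes):
glUseProgram(programId); // uniforms are set on the program currently in use
for (int i = 0; i < numPointLights; ++i) {
    std::stringstream ssD; // needs <sstream>
    ssD << "PointLights[" << i << "].DepthMap";
    glUniform1i(glGetUniformLocation(programId, ssD.str().c_str()), 4 + i); // unit index
    glActiveTexture(GL_TEXTURE4 + i); // then bind the cubemap to that same unit
    glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
}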

PBO Indexed Color Texture Rendering with Palette in Fragment Shader not working

I am working on a game with 8-bit graphics. I provide a pixel buffer (OSXRenderer.pbo) to my game loop to fill up, then texsubimage it onto a texture (OSXRenderer.ScreenTexture). The texture is rendered to the screen via a quad.
I got it working without problems with an RGB PBO (size: width*height*3).
But now I want the PBO to be indexed color, so I load a palette into another texture (OSXRenderer.PaletteTexture) and changed my PBO (size: width*height).
How I figure it should work is: the PBO gets filled with noise (random uint8, 0-63), the screen texture gets texsubimaged, and when rendering it onto the screen via the quad, my fragment shader replaces all the RED channel values with the corresponding colors from my palette, and I get 8-bit noise on the screen.
But I simply can't get it to work. I only get a black screen. If I set my fragcolor to the incoming screen-texture (PBO) data, I get red noise, just as expected.
[EDIT]
I tested the fragment shader's "color" variable: its values are always 0.0, except alpha, which is always 1.0.
setup:
static uint8 palette[] = {
0x80,0x80,0x80, 0x00,0x00,0xBB, 0x37,0x00,0xBF, 0x84,0x00,0xA6,
0xBB,0x00,0x6A, 0xB7,0x00,0x1E, 0xB3,0x00,0x00, 0x91,0x26,0x00,
0x7B,0x2B,0x00, 0x00,0x3E,0x00, 0x00,0x48,0x0D, 0x00,0x3C,0x22,
0x00,0x2F,0x66, 0x00,0x00,0x00, 0x05,0x05,0x05, 0x05,0x05,0x05,
0xC8,0xC8,0xC8, 0x00,0x59,0xFF, 0x44,0x3C,0xFF, 0xB7,0x33,0xCC,
0xFF,0x33,0xAA, 0xFF,0x37,0x5E, 0xFF,0x37,0x1A, 0xD5,0x4B,0x00,
0xC4,0x62,0x00, 0x3C,0x7B,0x00, 0x1E,0x84,0x15, 0x00,0x95,0x66,
0x00,0x84,0xC4, 0x11,0x11,0x11, 0x09,0x09,0x09, 0x09,0x09,0x09,
0xFF,0xFF,0xFF, 0x00,0x95,0xFF, 0x6F,0x84,0xFF, 0xD5,0x6F,0xFF,
0xFF,0x77,0xCC, 0xFF,0x6F,0x99, 0xFF,0x7B,0x59, 0xFF,0x91,0x5F,
0xFF,0xA2,0x33, 0xA6,0xBF,0x00, 0x51,0xD9,0x6A, 0x4D,0xD5,0xAE,
0x00,0xD9,0xFF, 0x66,0x66,0x66, 0x0D,0x0D,0x0D, 0x0D,0x0D,0x0D,
0xFF,0xFF,0xFF, 0x84,0xBF,0xFF, 0xBB,0xBB,0xFF, 0xD0,0xBB,0xFF,
0xFF,0xBF,0xEA, 0xFF,0xBF,0xCC, 0xFF,0xC4,0xB7, 0xFF,0xCC,0xAE,
0xFF,0xD9,0xA2, 0xCC,0xE1,0x99, 0xAE,0xEE,0xB7, 0xAA,0xF7,0xEE,
0xB3,0xEE,0xFF, 0xDD,0xDD,0xDD, 0x11,0x11,0x11, 0x11,0x11,0x11
};
/* Create the PBO */
glGenBuffers(1, &OSXRenderer.pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, OSXRenderer.pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, W*H, NULL, GL_STREAM_DRAW);
/* Create the Screen Texture (400*240 pixel) */
glGenTextures(1, &OSXRenderer.ScreenTexture);
glBindTexture(GL_TEXTURE_2D, OSXRenderer.ScreenTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, W, H, 0,
GL_RED, GL_UNSIGNED_BYTE, OSXRenderer.Pixelbuffer.Data);
/* Create the Palette Texture (64*1 pixel) */
glGenTextures(1, &OSXRenderer.PaletteTexture);
glBindTexture(GL_TEXTURE_2D, OSXRenderer.PaletteTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 1, 0,
GL_RGB, GL_UNSIGNED_BYTE, &palette);
/* Compile and Link Shaders */
OSXRenderer.Program = OSXCreateProgram();
glUseProgram(OSXRenderer.Program);
/* Get the uniforms for the screen- and the palette-texture */
OSXRenderer.UniformTex = glGetUniformLocation(OSXRenderer.Program, "tex");
OSXRenderer.UniformPal = glGetUniformLocation(OSXRenderer.Program, "pal");
update loop:
/* Rendering Prerequesites */
glUseProgram(OSXRenderer.Program);
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, OSXRenderer.PaletteTexture);
glUniform1i(OSXRenderer.UniformPal, 0);
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, OSXRenderer.ScreenTexture);
glUniform1i(OSXRenderer.UniformTex, 1);
/* Bind the PBO */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, OSXRenderer.pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, W*H, NULL, GL_STREAM_DRAW);
OSXRenderer.Pixelbuffer.Data = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
//
//
FillPixelBuffer();
//
//
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindTexture(GL_TEXTURE_2D, OSXRenderer.ScreenTexture);
/* Bind the screen texture again just to be safe
and fill it with the PBO data */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H, GL_RED, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
/* Render it to the screen */
glBegin(GL_QUADS);
glTexCoord2f(0.0f,1.0f);
glVertex2f(-1.0f,1.0f);
glTexCoord2f(1.0f,1.0f);
glVertex2f(1.0f,1.0f);
glTexCoord2f(1.0f,0.0f);
glVertex2f(1.0f,-1.0f);
glTexCoord2f(0.0f,0.0f);
glVertex2f(-1.0f,-1.0f);
glEnd();
/* glFlush() */
CGLFlushDrawable();
vertex shader:
# version 120
varying vec2 texcoord;
// Simple Passthrough
void main(void)
{
gl_Position = ftransform();
texcoord = gl_MultiTexCoord0.xy;
}
fragment shader:
# version 120
uniform sampler2D tex;
uniform sampler2D pal;
varying vec2 texcoord;
void main(void)
{
// Get the color values of the screen-texture. I only want the RED channel
vec4 index = texture2D(tex, texcoord);
// Get the color values of the palette texture
// using the screen-texture's RED channel as an index
// [EDIT] The first version of this post multiplied index.r by 255 here.
vec4 color = texture2D(pal, vec2(index.r, 0));
// Use it
gl_FragColor = color;
}

openGL fragment shader and the original texel data

So I've recently been learning some OpenGL. I had initially been using the SDL library to draw images on screen, but I figured it would be interesting to try to achieve something similar with OpenGL, which would also let me apply shaders to my images for neat effects such as lighting and day/night cycles. What I'm doing right now is simply loading a texture, then applying that texture to a quad with the same size as the texture. This works well.
Now I want to apply some shaders. This is an example of a vertex and fragment shader that I could apply to one of my textured quads:
in vec2 LVertexPos2D;
void main()
{
gl_Position = vec4( LVertexPos2D.x, LVertexPos2D.y, 0, 1);
}
which does nothing, then my fragment shader:
out vec4 LFragment;
void main()
{
LFragment = vec4(1.0, 1.0, 1.0, 1.0);
}
Which obviously just turns the texture I'm applying it to into a white block, which isn't exactly what I want. Somehow I need to retrieve the current texel data so I can modify that instead of simply replacing it.
I've read that a call to texture2D is supposed to return a vec4 of the current pixel data, but I haven't gotten it to work (I'm having a hard time finding a good explanation of the function's inputs and how it works). Furthermore, texture2D is supposedly deprecated, but I can't get its replacement, texture(), to work either. Any nudges in the right direction would be greatly appreciated!
Edit: I'll throw in some more info on how I'm doing things. This is the function that loads my textures:
texture makeTexture(std::string fileLocation)
{
texture tempTexture;
SDL_Surface *mySurface = IMG_Load(fileLocation.c_str());
if (mySurface == NULL)
{
std::cout << "Error in loading image at: " << fileLocation << std::endl;
return tempTexture;
}
GLuint myTexture;
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mySurface->w, mySurface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, mySurface->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
tempTexture.texture_id = myTexture;
tempTexture.h = mySurface->h; // read w/h before freeing the surface
tempTexture.w = mySurface->w;
SDL_FreeSurface(mySurface);
return tempTexture;
}
Where this is my texture struct:
struct texture
{
int w;
int h;
GLuint texture_id;
};
and this function draws any texture to a given x and y coordinate:
void draw(int y, int x, texture &tempTexture)
{
glBindTexture(GL_TEXTURE_2D, tempTexture.texture_id);
glBegin(GL_QUADS);
glTexCoord2f(0, 1);
glVertex2f(-1 + ((float)(x) / SCREEN_WIDTH) * 2, 1 - ((float)(y + tempTexture.h) / SCREEN_HEIGHT) * 2); //Bottom left
glTexCoord2f(1, 1);
glVertex2f(-1 + ((float)(x + tempTexture.w)/SCREEN_WIDTH)*2, 1 - ((float)(y + tempTexture.h) / SCREEN_HEIGHT) * 2); //Bottom right?
glTexCoord2f(1, 0);
glVertex2f(-1 + ((float)(x + tempTexture.w) / SCREEN_WIDTH) * 2, 1.0 - ((float)y / SCREEN_HEIGHT) * 2); //top right
glTexCoord2f(0, 0);
glVertex2f(-1 + ((float)(x) / SCREEN_WIDTH) * 2, 1.0 - ((float)y / SCREEN_HEIGHT) * 2); //Top left (note: coordinates are (x,y), not (y,x))
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
}
then in my main render function I'm now doing:
draw(0, 0, myTexture);
glUseProgram(gProgramID);
glUniform1i(baseImageLoc, myTexture2.texture_id);
draw(100, 100, myTexture2);
glUseProgram(NULL);
where myTexture is just a meadow of grass and myTexture2 is a player character that I want to apply some shading shenanigans to. gProgramID is a program that has my two aforementioned shaders attached.
In order to access texture data in a shader you have to do the following:
First you need to glBind your texture to a specific texture unit (change the active texture unit using glActiveTexture).
Pass the texture unit index as a uniform sampler to the shader.
Access the texture in the shader like the following.
// tex holds the value of the texture unit to be used (not the texture)
uniform sampler2D tex;
void main()
{
vec4 color = texture(tex,texCoord);
LFragment = color;
}
You also need to pass texCoord to the shader as a vertex attribute.
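Putting those steps together with the question's draw code, the render step might look like this (a sketch; note that the sampler uniform receives the texture unit index, not the GLuint returned by glGenTextures, which is what the glUniform1i(baseImageLoc, myTexture2.texture_id) call above passes):
glUseProgram(gProgramID);
glActiveTexture(GL_TEXTURE0);                        // select texture unit 0
glBindTexture(GL_TEXTURE_2D, myTexture2.texture_id); // bind the texture to that unit
glUniform1i(baseImageLoc, 0);                        // 0 = the unit index, not texture_id
draw(100, 100, myTexture2);
glUseProgram(0);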

Multiple textures in GLSL - only one works

My problem is getting more than one texture accessible in a GLSL shader.
Here's what I'm doing:
Shader:
uniform sampler2D sampler0;
uniform sampler2D sampler1;
uniform float blend;
void main( void )
{
vec2 coords = gl_TexCoord[0];
vec4 col = texture2D(sampler0, coords);
vec4 col2 = texture2D(sampler1, coords);
if (blend > 0.5){
gl_FragColor = col;
} else {
gl_FragColor = col2;
}
};
So, I simply choose between the two color values based on a uniform variable. Simple enough (this is a test), but instead of the expected behavior, I get all black when blend <= 0.5.
OpenGL code:
m_sampler0location = m_shader.FindUniform("sampler0");
m_sampler1location = m_shader.FindUniform("sampler1");
m_blendlocation = m_shader.FindUniform("blend");
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
m_extensions.glUniform1iARB(m_sampler0location, 0);
glBindTexture(GL_TEXTURE_2D, Texture0.Handle);
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
m_extensions.glUniform1iARB(m_sampler1location, 1);
glBindTexture(GL_TEXTURE_2D, Texture1.Handle);
glBegin(GL_QUADS);
//lower left
glTexCoord2f(0, 0);
glVertex2f(-1.0, -1.0);
//upper left
glTexCoord2f(0, maxCoords0.t);
glVertex2f(-1.0, 1.0);
//upper right
glTexCoord2f(maxCoords0.s, maxCoords0.t);
glVertex2f(1.0, 1.0);
//lower right
glTexCoord2f(maxCoords0.s, 0);
glVertex2f(1.0, -1.0);
glEnd();
The shader is compiled and bound before all this. All the sanity checks in that process indicate that it goes ok.
As I said, the value of col in the shader program reflects fragments from a texture; the value of col2 is black. The texture that is displayed is the last active texture - if I change the last glBindTexture to bind Texture0.Handle, the texture changes. Fixed according to Bahbar's reply.
As it is, the scene renders all black, even if I add something like gl_FragColor.r = blend; as the last line of the shader. But, if I comment out the call glActiveTexture(GL_TEXTURE1);, the shader works again, and the same texture appears in both sampler0 and sampler1.
What's going on? The line in question, glActiveTexture(GL_TEXTURE1);, seems to work just fine, as evidenced by a subsequent glGetIntegerv(GL_ACTIVE_TEXTURE, &anint). Why does it break everything so horribly? I've already tried upgrading my display drivers.
Here's a basic GLUT example (written on OS X, adapt as needed) that generates two checkerboard textures, loads a shader with two samplers and combines them by tinting each (one red, one blue) and blending. See if this works for you:
#include <stdio.h>
#include <stdlib.h>
#include <GLUT/glut.h>
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#define kTextureDim 64
GLuint t1;
GLuint t2;
/* adapted from the red book */
GLuint makeCheckTex() {
GLubyte image[kTextureDim][kTextureDim][4]; // RGBA storage
for (int i = 0; i < kTextureDim; i++) {
for (int j = 0; j < kTextureDim; j++) {
int c = ((((i & 0x8) == 0) ^ ((j & 0x8)) == 0))*255;
image[i][j][0] = (GLubyte)c;
image[i][j][1] = (GLubyte)c;
image[i][j][2] = (GLubyte)c;
image[i][j][3] = (GLubyte)255;
}
}
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, kTextureDim, kTextureDim, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
return texName;
}
void loadShader() {
#define STRINGIFY(A) #A
const GLchar* source = STRINGIFY(
uniform sampler2D tex1;
uniform sampler2D tex2;
void main() {
vec4 s1 = texture2D(tex1, gl_TexCoord[0].st);
vec4 s2 = texture2D(tex2, gl_TexCoord[0].st + vec2(0.0625, 0.0625));
gl_FragColor = mix(vec4(1, s1.g, s1.b, 0.5), vec4(s2.r, s2.g, 1, 0.5), 0.5);
}
);
GLuint program = glCreateProgram();
GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(shader, 1, &source, NULL);
glCompileShader(shader);
GLint logLength;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0) {
GLchar* log = (GLchar*)malloc(logLength);
glGetShaderInfoLog(shader, logLength, &logLength, log);
printf("Shader compile log:\n%s\n", log);
free(log);
}
glAttachShader(program, shader);
glLinkProgram(program);
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0) {
GLchar* log = (GLchar*)malloc(logLength);
glGetProgramInfoLog(program, logLength, &logLength, log);
printf("Program link log:\n%s\n", log);
free(log);
}
glUseProgram(program); // the program must be in use before its uniforms are set
GLint t1Location = glGetUniformLocation(program, "tex1");
GLint t2Location = glGetUniformLocation(program, "tex2");
glUniform1i(t1Location, 0);
glUniform1i(t2Location, 1);
}
void init()
{
glClearColor(0.0, 0.0, 0.0, 0.0);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_FLAT);
t1 = makeCheckTex();
t2 = makeCheckTex();
loadShader();
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, t1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, t2);
glBegin(GL_QUADS);
//lower left
glTexCoord2f(0, 0);
glVertex2f(-1.0, -1.0);
//upper left
glTexCoord2f(0, 1.0);
glVertex2f(-1.0, 1.0);
//upper right
glTexCoord2f(1.0, 1.0);
glVertex2f(1.0, 1.0);
//lower right
glTexCoord2f(1.0, 0);
glVertex2f(1.0, -1.0);
glEnd();
glutSwapBuffers();
}
void reshape(int w, int h)
{
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-2, 2, -2, 2, -2, 2);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH | GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitWindowPosition(0, 0);
glutCreateWindow("GLSL Texture Blending");
glutReshapeFunc(reshape);
glutDisplayFunc(display);
glutIdleFunc(display);
init();
glutMainLoop();
return 0;
}
Hopefully the result will look something like this (you can comment out the glUseProgram call to see the first texture drawn without the shader):
Just to help other people who might be interested in using multiple textures and who feel hopeless after days of searching for an answer: I found that you need to call
glUseProgram(program);
GLuint t1Location = glGetUniformLocation(program, "tex1");
GLuint t2Location = glGetUniformLocation(program, "tex2");
glUniform1i(t1Location, 0);
glUniform1i(t2Location, 1);
in that order for it to work (glUseProgram is the last instruction in 99% of the sample code I've found online). Now it may only be the case for me and not affect anyone else on Earth, but just in case, I thought I'd share.
Quite a late reply, but for anybody encountering this: I ran into the same problem, and after some short fiddling I realized that calling glActiveTexture(GL_TEXTURE0) fixes the issue.
It seems something down the line gets confused if the active texture unit is not 'reset' to zero.
This could be system-dependent behavior; I'm running 64-bit Arch Linux with Mesa 8.0.4.
When compiling your shader to test, I found two errors:
coords should be assigned the st portion of the 4-component gl_TexCoord, e.g.
vec2 coords = gl_TexCoord[0].st;
The shader should not end with a semicolon.
Are you checking anywhere in your main program that the shader compiles correctly? You may want to look at GL_COMPILE_STATUS via glGetShaderiv and glGetShaderInfoLog.
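A minimal version of that check (a sketch with the standard calls):
GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    char log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    fprintf(stderr, "shader compile failed:\n%s\n", log);
}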
It sounds like your glActiveTexture call does not work. Are you sure you set up the function pointer correctly?
Verify by calling glGetIntegerv(GL_ACTIVE_TEXTURE, &anint) after having called glActiveTexture(GL_TEXTURE1).
Also, the glEnable(GL_TEXTURE_2D) calls are not useful: the shader itself specifies which texture units to use, and which target of each unit to "enable".
Edit to add:
Well, your new situation is peculiar. The fact that you can't even show red is weird in particular (did you set alpha to 1 just to make sure?).
That said, you should restore GL_TEXTURE0 as the active texture after you're done setting up texture unit 1 (i.e. after the glBindTexture call).