glDeleteTextures causes segfault - C++

I have a C++ console application that creates a window and initializes an OpenGL context.
IDE: Code::Blocks
Compiler: MinGW (32-bit)
OS: Windows 8.1 64-bit
When I compile and run, I get a console and the window with the context. If I close the window first and then the console, everything is okay.
However, if I close the console first, I get a segmentation fault from glDeleteTextures in the virtual destructor.
Here is how I initialize the texture:
texture::texture(const string& fileName)
{
    int width, height, numComponents;
    unsigned char* imageData = stbi_load(fileName.c_str(), &width, &height, &numComponents, 4);
    if(imageData == NULL)
        cout << "Could not open " << fileName << "." << endl;

    glGenTextures(1, &m_texture);
    glBindTexture(GL_TEXTURE_2D, m_texture);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

    glBindTexture(GL_TEXTURE, 0);
    stbi_image_free(imageData);
}
void texture::useTexture(unsigned int textureUnit)
{
    if(textureUnit < 0 || textureUnit > 32)
    {
        cout << "Texture unit not between 0 and 32. Setting it to 0..." << endl;
        textureUnit = 0;
    }
    glActiveTexture(GL_TEXTURE0 + textureUnit);
    glBindTexture(GL_TEXTURE_2D, m_texture);
}

texture::~texture()
{
    glDeleteTextures(1, &m_texture);
}
Keep in mind that I wrote the same code, compiled it with 64-bit MinGW and 64-bit GLFW, and it worked just fine... if that helps in any way.
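Since GLFW is in the picture, here is a minimal defensive sketch (an assumption, not a confirmed fix) that skips the GL call when no context is current any more, which is exactly the state the destructor can run into when the process is being torn down:

texture::~texture()
{
    // Hypothetical guard, assuming GLFW: if the context has already been
    // destroyed (e.g. the console/window was closed first), calling any GL
    // function such as glDeleteTextures can crash, so skip the cleanup.
    if(glfwGetCurrentContext() != nullptr)
        glDeleteTextures(1, &m_texture);
}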

Related

OpenGL 4.1 and lower: black texture on Mac and Windows

I had this problem when compiling my OpenGL code on lower-end PCs that don't support OpenGL 4.5, and on Macs. In my regular code I use functions like glCreateTextures and glTextureStorage2D, but they are not supported in earlier versions, so I went down the glGenTextures path instead.
Here's the image generation code:
Texture::Texture(const std::string& path)
    : m_Path(path)
{
    int width, height, channels;
    stbi_set_flip_vertically_on_load(1);
    unsigned char* data = stbi_load(path.c_str(), &width, &height, &channels, 0);
    RW_CORE_ASSERT(data, "Failed to load image!");
    m_Width = width;
    m_Height = height;

    GLenum internalFormat = 0, dataFormat = 0;
    if (channels == 4)
    {
        internalFormat = GL_RGBA8;
        dataFormat = GL_RGBA;
    }
    else if (channels == 3)
    {
        internalFormat = GL_RGB8;
        dataFormat = GL_RGB;
    }
    m_InternalFormat = internalFormat;
    m_DataFormat = dataFormat;
    RW_CORE_ASSERT(internalFormat & dataFormat, "Format not supported!");

    glGenTextures(1, &m_ID);
    glBindTexture(GL_TEXTURE_2D, m_ID);

    glTexParameteri(m_ID, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(m_ID, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(m_ID, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(m_ID, GL_TEXTURE_WRAP_T, GL_REPEAT);

    glTexImage2D(m_ID, 1, internalFormat, m_Width, m_Height, 0, dataFormat, GL_UNSIGNED_BYTE, data);

    glBindTexture(GL_TEXTURE_2D, 0);
    stbi_image_free(data);
}
I want to bind my textures to specific slots on the GPU, so I have this function:
void Texture::Bind(uint32_t slot) const
{
    glActiveTexture(GL_TEXTURE0 + slot);
    glBindTexture(GL_TEXTURE_2D, m_ID);
}
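For context, a hypothetical call site looks like this (the object, program, and uniform names are illustrative, not from the original code):

texture.Bind(3);                                                    // make this texture current on unit 3
glUniform1i(glGetUniformLocation(shaderProgram, "u_Texture"), 3);   // tell the sampler to read from unit 3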
Here's the screenshot of what gets drawn:
To make sure it wasn't a rendering problem, I also displayed it through ImGui's renderer.
And here's the picture I am supposed to get:
The image imports correctly and I get no errors; the same importing code and paths work on the higher-end PC, and the only difference is that the higher-end PC uses the OpenGL 4.5 texture creation code.
It turns out I had to specify GL_TEXTURE_2D in these places instead of the texture ID:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 1, internalFormat, m_Width, m_Height, 0, dataFormat, GL_UNSIGNED_BYTE, data);
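Note that the second argument to glTexImage2D is the mipmap level; it should presumably be 0 rather than 1, since uploading only level 1 leaves the base level undefined and the texture incomplete with these filter settings:

glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, m_Width, m_Height, 0, dataFormat, GL_UNSIGNED_BYTE, data);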

BMP texture not loading correctly using OpenGL

Hello everyone, I'm trying to load a texture (a normal map) using OpenGL.
GLuint loadTexture(const char* fileName) {
    GLuint textureID;
    glGenTextures(1, &textureID);

    // load file - using core SDL library
    SDL_Surface* tmpSurface;
    tmpSurface = SDL_LoadBMP(fileName);
    if (tmpSurface == nullptr) {
        std::cout << "Error loading bitmap" << std::endl;
    }

    // bind texture and set parameters
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tmpSurface->w, tmpSurface->h, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, tmpSurface->pixels);
    glGenerateMipmap(GL_TEXTURE_2D);

    SDL_FreeSurface(tmpSurface);
    return textureID;
}
Then if I try to render it, this is what I get:
Instead of:
But I can render this one normally:
Do you have an idea? The color depth is 32 for the texture that is not working and 24 for the working one.
Use SDL_ConvertSurfaceFormat to convert the surface format and load the converted surface:
SDL_Surface* tmpSurface = SDL_LoadBMP(fileName);
SDL_Surface* formattedSurface = SDL_ConvertSurfaceFormat(
    tmpSurface, SDL_PIXELFORMAT_ABGR8888, 0);
SDL_FreeSurface(tmpSurface);

// [...]

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, formattedSurface->w, formattedSurface->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, formattedSurface->pixels);
SDL_FreeSurface(formattedSurface);
Of course, you can instead evaluate the format attribute of the SDL_Surface and pass a matching format when the two-dimensional texture image is specified with glTexImage2D, as sketched below.
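A minimal sketch of that alternative, assuming SDL2 and that the original tmpSurface is kept alive instead of being converted (the mask check is an assumption about how the BMP was saved):

// Derive the client-side pixel format from what SDL actually loaded
GLenum dataFormat = GL_BGR;                                    // common layout for 24-bit BMPs
if (tmpSurface->format->BytesPerPixel == 4)
    dataFormat = (tmpSurface->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;

// Tell GL how the rows are laid out in memory
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, tmpSurface->pitch / tmpSurface->format->BytesPerPixel);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tmpSurface->w, tmpSurface->h, 0,
             dataFormat, GL_UNSIGNED_BYTE, tmpSurface->pixels);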

How to properly load an image to use as an OpenGL texture?

I am trying to load an image into an OpenGL texture using SOIL2; however, it never seems to come out right unless I use SOIL2's load-to-texture function. I have tried using stb_image and DevIL, but both give similar results.
Code:
GLuint load_image(const std::string& path) {
    int iwidth, iheight, channels;
    unsigned char* image = SOIL_load_image(path.c_str(), &iwidth, &iheight, &channels, SOIL_LOAD_RGBA);
    // std::cout << SOIL_last_result() << std::endl;

    // float* image = stbi_loadf(path.c_str(), &iwidth, &iheight, &channels, STBI_rgb_alpha);

    // if(!ilLoadImage(path.c_str()))
    //     std::cout << "Devil Failed to load image: " << iluErrorString(ilGetError()) << std::endl;
    //
    // unsigned char* image = ilGetData();
    //
    // int iwidth = ilGetInteger(IL_IMAGE_WIDTH);
    // int iheight = ilGetInteger(IL_IMAGE_HEIGHT);
    // int channels = ilGetInteger(IL_IMAGE_CHANNELS);

    GLuint texture;
    glGenTextures(1, &texture);
    glActiveTexture(GL_TEXTURE0 + texture);

    GLint old_unpack_alignment;
    glGetIntegerv(GL_UNPACK_ALIGNMENT, &old_unpack_alignment);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glBindTexture(GL_TEXTURE_2D, texture);
    glCheckError();
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glCheckError();

    GLenum original_format = (channels == 4 ? GL_RGBA : GL_RGB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D);

    glPixelStorei(GL_UNPACK_ALIGNMENT, old_unpack_alignment);
    return texture;
}
Screenshot:
What I should get:
I would like to know how to properly load an image into a texture.
Here is an example of what my texture loading function looks like:
unsigned int loadTexture(char const * path)
{
    unsigned int textureID;
    glGenTextures(1, &textureID);

    int width, height, nrComponents;
    unsigned char *data = stbi_load(path, &width, &height, &nrComponents, 0);
    if (data)
    {
        GLenum format;
        if (nrComponents == 1)
            format = GL_RED;
        else if (nrComponents == 3)
            format = GL_RGB;
        else if (nrComponents == 4)
            format = GL_RGBA;

        glBindTexture(GL_TEXTURE_2D, textureID);
        glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, format == GL_RGBA ? GL_CLAMP_TO_EDGE : GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, format == GL_RGBA ? GL_CLAMP_TO_EDGE : GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        stbi_image_free(data);
    }
    else
    {
        std::cout << "Texture failed to load at path: " << path << std::endl;
        stbi_image_free(data);
    }

    return textureID;
}
I will usually set up my VAO and VBO beforehand, then use this function to load in a texture. After that I configure my shader(s) for use. Within the render loop I use the shader and set the matrices, passing in any needed uniforms; once all the "model" information is in place I bind the vertex arrays, set the appropriate texture unit active, bind the texture(s), and finish up by drawing the arrays or primitives, as in the sketch below.
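A minimal sketch of that flow, assuming GLFW for the window and a small shader wrapper (the window, VAO, shader wrapper, asset name, and uniform names are assumptions for illustration):

unsigned int diffuseMap = loadTexture("container.png");   // assumed asset name
shader.use();                                              // assumed shader wrapper
shader.setInt("texture_diffuse1", 0);                      // sampler reads from texture unit 0

while (!glfwWindowShouldClose(window))
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    shader.use();
    shader.setMat4("model", model);                        // pass the needed matrices/uniforms

    glActiveTexture(GL_TEXTURE0);                          // make unit 0 active...
    glBindTexture(GL_TEXTURE_2D, diffuseMap);              // ...and bind the loaded texture to it

    glBindVertexArray(VAO);
    glDrawArrays(GL_TRIANGLES, 0, 36);

    glfwSwapBuffers(window);
    glfwPollEvents();
}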

GL_INVALID_OPERATION in glGenerateMipmap(incomplete cube map)

I'm trying to learn OpenGL and I'm using SOIL to load images.
I have the following piece of code:
GLuint texID = 0;

bool loadCubeMap(const char * baseFileName) {
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &texID);
    glBindTexture(GL_TEXTURE_CUBE_MAP, texID);

    const char * suffixes[] = { "posx", "negx", "posy", "negy", "posz", "negz" };
    GLuint targets[] = {
        GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
    };

    for (int i = 0; i < 6; i++) {
        int width, height;
        std::string fileName = std::string(baseFileName) + "_" + suffixes[i] + ".png";
        std::cout << "Loading: " << fileName << std::endl;
        unsigned char * image = SOIL_load_image(fileName.c_str(), &width, &height, 0, SOIL_LOAD_RGB);
        if (!image) {
            std::cerr << __FUNCTION__ << " cannot load image " << fileName << " (" << SOIL_last_result() << ")" << std::endl;
            return false;
        }
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
        SOIL_free_image_data(image);
    }

    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    glGenerateMipmap(GL_TEXTURE_CUBE_MAP);

    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
    return true;
}
When I call this, the images load successfully, but then I get an error in the console:
---- OGL DEBUG ----
message <1>: 'API' reported 'Error' with 'High' severity:
GL_INVALID_OPERATION in glGenerateMipmap(incomplete cube map)
---- BACKTRACE ----
and no cubemap is displayed at all.
Do you see any mistake in this code?
You never actually specify the texture image for the cube map faces. You instead call glTexImage2D on the GL_TEXTURE_2D target for all of the cube faces.
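In other words, inside the loop the upload presumably needs to target the current face rather than GL_TEXTURE_2D, along these lines:

// Upload to the i-th cube map face instead of the (unbound) GL_TEXTURE_2D target
glTexImage2D(targets[i], 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);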

Texture does not wrap where expected

I am generating a bitmap (1 byte per pixel) and attempting to use it for alpha blending. I am successfully using the bitmap, but it appears that the texture does not wrap lines as I expect.
When I use the following code, it wraps where I would expect, given the input image. I get the set of Xs that I expect.
std::ofstream file{ R"(FileName.txt)" };
file << "width: " << gs.width() << "\theight: " << gs.height() << "\n";
for (int i = 0; i < gs.height(); ++i)
{
    for (int j = 0; j < gs.width(); ++j)
    {
        file << ((gs.alpha()[j + i * gs.width()]) ? 'X' : ' ');
    }
    file << "\n";
}
When I load the texture it appears that the width of the texture does not match gs.width(), since it wraps oddly.
This is the code that I use to create the texture and load it with the bitmap.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.alpha());
Can anyone suggest what I might be doing wrong?
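One thing worth checking with a tightly packed 1-byte-per-pixel upload like this (an assumption about the cause, not something stated above): OpenGL's default row alignment is 4 bytes, so if gs.width() is not a multiple of 4, glTexImage2D reads each row from a padded offset that the bitmap does not actually have, and the image appears to wrap oddly. A minimal sketch:

// GL_UNPACK_ALIGNMENT defaults to 4; a width that is not a multiple of 4
// makes GL expect padding bytes at the end of each row of a tightly packed
// 1-byte-per-pixel image, which shows up as odd wrapping.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.alpha());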