I have seen many code samples for loading textures for OpenGL, many of them a bit complicated to understand or requiring a lot of new code.
Since OpenCV lets us load virtually any image format, I was thinking it could be a simple and efficient way to load textures into OpenGL, but I am missing something. I have this piece of code in C++:
cv::Mat texture_cv;
GLuint texture[1];
int Status=FALSE;
if( texture_cv = imread("stones.jpg")) {
Status=TRUE; // Set The Status To TRUE
glGenTextures(1, &texture[0]); // Create The Texture
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S , GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
glTexImage2D(GL_TEXTURE_2D, 0, 3, texture_cv.cols, texture_cv.rows, 0, GL_RGB, GL_UNSIGNED_BYTE, texture_cv.data);
}
And it is not compiling because of this error:
error C2451: conditional expression of type 'cv::Mat' is illegal
Any suggestions? How should I do the conversion from cv::Mat to an OpenGL texture?
Your error comes from this line:
if( texture_cv = imread("stones.jpg")) {
In if(expr), expr must be a bool or something convertible to bool, and cv::Mat has no implicit conversion to bool. You can check the result of imread like this instead:
texture_cv = imread("stones.jpg");
if (texture_cv.empty()) {
// handle the error
} else {
// do right job
}
See: cv::Mat::empty(), cv::imread
Hope that helped you.
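Putting the pieces together, a minimal sketch of the whole load could look like this (not tested; also keep in mind that cv::imread returns pixels in BGR order, so either pass GL_BGR to OpenGL as below or convert with cv::cvtColor first):
cv::Mat texture_cv = cv::imread("stones.jpg");
GLuint texture_id = 0;
if (!texture_cv.empty()) {
    glGenTextures(1, &texture_id);
    glBindTexture(GL_TEXTURE_2D, texture_id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // imread normally gives tightly packed byte rows
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 texture_cv.cols, texture_cv.rows, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, texture_cv.data);
}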
The assignment operator
texture_cv = imread("stones.jpg")
returns a cv::Mat that can't be used in a conditional expression. You should write something like
if((texture_cv = imread("stones.jpg")) != /* insert condition here */ ) {
//...
}
or
texture_cv = imread("stones.jpg");
if(!texture_cv.empty()) {
//...
}
Based on the docs, I suggest you change your test:
texture_cv = imread("stones.jpg");
if (texture_cv.data != NULL) {
...
Another short question...
I think you may need to use
glTexImage2D(GL_TEXTURE_2D, 0, 3, texture_cv.cols, texture_cv.rows, 0, GL_RGB, GL_UNSIGNED_BYTE, texture_cv.ptr());
instead of
glTexImage2D(GL_TEXTURE_2D, 0, 3, texture_cv.cols, texture_cv.rows, 0, GL_RGB, GL_UNSIGNED_BYTE, texture_cv.data);
I use a texture array to store texture atlases. For hardware which supports OpenGL 4.2 I use the glTexStorage3D approach; however, I would like to use texture arrays pre-4.2 as well.
I checked several other threads with the same problem like this or this. I tried to follow the solutions provided there however the texture array seems to be empty, no texture is visible during rendering.
My glTexStorage3D solution which works without any problem:
glTexStorage3D(GL_TEXTURE_2D_ARRAY,
1,
GL_R8,
2048, 2048,
100);
And here is the glTexImage3D call, which should be equivalent but produces nothing on screen:
glTexImage3D(GL_TEXTURE_2D_ARRAY,
0,
GL_R8,
2048, 2048, 100,
0,
GL_RED,
GL_UNSIGNED_BYTE,
0);
The texture data is uploaded to the specified index with the following snippet (atlas width and height are 2048 and depth is 1):
glBindTexture(GL_TEXTURE_2D_ARRAY, m_arrayTexture);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
0,
0, 0, m_nextTextureLevel,
atlas->width, atlas->height, atlas->depth,
GL_RED,
GL_UNSIGNED_BYTE,
atlas->data);
What am I missing here? Any help would be highly appreciated.
Edit:
Uploading the texture data to the array right away is not an option as new textures can be added to the array during execution.
Edit v2, solution
As usual, the problem was something trivial that I overlooked. I dove into Nazar554's solution and compared it to my code. The problem was that I accidentally set the texture parameters using the wrong target, so the glTexParameteri calls were made with GL_TEXTURE_2D instead of GL_TEXTURE_2D_ARRAY. After changing these values everything worked like a charm.
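For reference, a sketch of what that fix amounts to (the exact filter and wrap values are whatever the original code already used; the point is the target constant):
glBindTexture(GL_TEXTURE_2D_ARRAY, m_arrayTexture);
// The target must be GL_TEXTURE_2D_ARRAY here, not GL_TEXTURE_2D.
// With glTexImage3D only level 0 exists, so the default mipmapped
// minification filter otherwise leaves the array texture incomplete.
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);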
You can take a look at my Texture.cpp I used in my project.
However, I did not use glTexSubImage3D() in the fallback case. Instead I uploaded the texture data immediately (you are passing 0 as the data pointer, which only preallocates the buffer).
Functions that might be interesting to you: Texture::loadTexStorageInternal(const std::string& fileName) and
bool Texture::loadTexInternal(const std::string& fileName)
Here is one of them; it handles the fallback when glTexStorage3D is unavailable. It is quite long because it tries to handle compressed formats and mipmaps.
bool Texture::loadTexInternal(const std::string& fileName)
{
    gli::texture Texture = gli::load(fileName);
    if(Texture.empty())
        return 0;

    const gli::gl GL(gli::gl::PROFILE_GL33);
    const gli::gl::format Format = GL.translate(Texture.format(), Texture.swizzles());
    GLenum Target = static_cast<GLenum>(GL.translate(Texture.target()));

    Binder texBinder(*this, Target);

    glTexParameteri(Target, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(Target, GL_TEXTURE_MAX_LEVEL, static_cast<GLint>(Texture.levels() - 1));
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_R, Format.Swizzles[0]);
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_G, Format.Swizzles[1]);
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_B, Format.Swizzles[2]);
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_A, Format.Swizzles[3]);

    if(Texture.levels() >= 1)
        glTexParameteri(Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    else
        glTexParameteri(Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glTexParameteri(Target, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(Target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(Target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(Target, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    //glm::tvec3<GLsizei> const Extent(Texture.extent());

    for(std::size_t Layer = 0; Layer < Texture.layers(); ++Layer)
    for(std::size_t Level = 0; Level < Texture.levels(); ++Level)
    for(std::size_t Face = 0; Face < Texture.faces(); ++Face)
    {
        GLsizei const LayerGL = static_cast<GLsizei>(Layer);
        glm::tvec3<GLsizei> loopExtent(Texture.extent(Level));
        Target = gli::is_target_cube(Texture.target())
            ? static_cast<GLenum>(static_cast<GLint>(GL_TEXTURE_CUBE_MAP_POSITIVE_X) + static_cast<GLint>(Face))
            : Target;

        switch(Texture.target())
        {
        case gli::TARGET_1D:
            if(gli::is_compressed(Texture.format()))
                glCompressedTexImage1D(
                    Target,
                    static_cast<GLint>(Level),
                    static_cast<GLenum>(static_cast<GLenum>(Format.Internal)),
                    0, loopExtent.x,
                    static_cast<GLsizei>(Texture.size(Level)),
                    Texture.data(Layer, Face, Level));
            else
                glTexImage1D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x,
                    0,
                    static_cast<GLenum>(Format.External), static_cast<GLenum>(Format.Type),
                    Texture.data(Layer, Face, Level));
            break;
        case gli::TARGET_1D_ARRAY:
        case gli::TARGET_2D:
        case gli::TARGET_CUBE:
            if(gli::is_compressed(Texture.format()))
                glCompressedTexImage2D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x,
                    Texture.target() == gli::TARGET_1D_ARRAY ? LayerGL : loopExtent.y,
                    0,
                    static_cast<GLsizei>(Texture.size(Level)),
                    Texture.data(Layer, Face, Level));
            else
                glTexImage2D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x,
                    Texture.target() == gli::TARGET_1D_ARRAY ? LayerGL : loopExtent.y,
                    0,
                    static_cast<GLenum>(Format.External), static_cast<GLenum>(Format.Type),
                    Texture.data(Layer, Face, Level));
            break;
        case gli::TARGET_2D_ARRAY:
        case gli::TARGET_3D:
        case gli::TARGET_CUBE_ARRAY:
            if(gli::is_compressed(Texture.format()))
                glCompressedTexImage3D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x, loopExtent.y,
                    Texture.target() == gli::TARGET_3D ? loopExtent.z : LayerGL,
                    0,
                    static_cast<GLsizei>(Texture.size(Level)),
                    Texture.data(Layer, Face, Level));
            else
                glTexImage3D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x, loopExtent.y,
                    Texture.target() == gli::TARGET_3D ? loopExtent.z : LayerGL,
                    0,
                    static_cast<GLenum>(Format.External), static_cast<GLenum>(Format.Type),
                    Texture.data(Layer, Face, Level));
            break;
        default:
            return false;
        }
    }
    return true;
}
I'm trying to convert working OpenGL code to OpenGL ES. After some digging, I've concluded that the following function doesn't work in ES because converting between format and internalFormat isn't supported (i.e. the source and destination formats need to be the same). The easiest fix seems to be converting the alpha data to RGBA with r = g = b = 0, which is what OpenGL was doing under the hood before. My attempted fix doesn't seem to work, though, because
I don't think I understand how the buffer is formatted to make that conversion manually. Also, maybe there is an OpenGL ES function I can call that will make this copy for me. Not sure if it matters, but the file is a TGA file.
void foo( unsigned char *inBytes,
          unsigned int inWidth,
          unsigned int inHeight ) {
    int error;

    GLenum internalTexFormat = GL_RGBA;
    GLenum texDataFormat = GL_ALPHA;

    if( myAttemptedFix ) {
        texDataFormat = GL_RGBA;

        unsigned char rgbaBytes[inWidth * inHeight * 4];
        for(int i=0; i < inWidth * inHeight; i++) {
            rgbaBytes[4*i] = 0;
            rgbaBytes[4*i + 1] = 0;
            rgbaBytes[4*i + 2] = 0;
            rgbaBytes[4*i + 3] = inBytes[i];
        }
        inBytes = &rgbaBytes[0];
    }

    glBindTexture( GL_TEXTURE_2D, mTextureID );

    error = glGetError();
    if( error != GL_NO_ERROR ) { // error
        printf( "Error binding to texture id %d, error = %d\n",
                (int)mTextureID,
                error );
    }

    glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );

    if( mRepeat ) {
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
    }
    else {
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP );
    }

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );

    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );

    glTexImage2D( GL_TEXTURE_2D, 0,
                  internalTexFormat, inWidth,
                  inHeight, 0,
                  texDataFormat, GL_UNSIGNED_BYTE, inBytes );

    error = glGetError();
    if( error != GL_NO_ERROR ) { // error
        printf( "Error setting texture data for id %d, error = %d, \"%s\"\n",
                (int)mTextureID, error, glGetString( error ) );
    }
}
Edit: When I run my fix it outlines the sprite correctly, but it also puts a lot of junk at the bottom that kind of looks like braille.
This looks more like a C++ problem. I believe your corrupted data is caused by this (shortened) code structure:
if (myAttemptedFix) {
unsigned char rgbaBytes[inWidth * inHeight * 4];
inBytes = &rgbaBytes[0];
}
The scope of rgbaBytes is the body of the if-statement. So the memory reserved for the array becomes invalid after the closing brace, and its content becomes undefined beyond that point. But you make your inBytes variable point at this memory, and use it after rgbaBytes has gone out of scope.
Since inBytes then points at unreserved memory, it's very likely that the memory is occupied by other variables in the code between this point and the glTexImage2D() call. So the content gets trashed before inBytes is consumed by the glTexImage2D() call.
The easiest way to fix this is to move the rgbaBytes declaration outside the if-statement:
unsigned char rgbaBytes[inWidth * inHeight * 4];
if (myAttemptedFix) {
inBytes = &rgbaBytes[0];
}
You'll probably want to make the code structure a little nicer once you have this all figured out, but this should at least make it functional.
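As a variation on that (my own sketch, not the original code), a std::vector declared at function scope sidesteps both the lifetime problem and the non-standard variable-length array:
std::vector<unsigned char> rgbaBytes;              // needs #include <vector>; lives until foo() returns
if( myAttemptedFix ) {
    texDataFormat = GL_RGBA;
    rgbaBytes.resize(inWidth * inHeight * 4);
    for (unsigned int i = 0; i < inWidth * inHeight; i++) {
        rgbaBytes[4*i]     = 0;
        rgbaBytes[4*i + 1] = 0;
        rgbaBytes[4*i + 2] = 0;
        rgbaBytes[4*i + 3] = inBytes[i];           // keep the original alpha value
    }
    inBytes = rgbaBytes.data();                    // still valid when glTexImage2D runs
}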
I'm having some weird memory issues in a C program I'm writing, and I think something related to my texture loading system is the cause.
The problem is that, depending on how many textures I make, different issues start coming up. Fewer textures tend to ever so slightly change other variables in the program. If I include all the textures I want to include, the program may spit out a host of different "*** glibc detected ***" type errors, and occasionally a segmentation fault.
The kicker is that occasionally, the program works perfectly. It's all the luck of the draw.
My code is pretty heavy at this point, so I'll just post what I believe to be the relevant parts of it.
d_newTexture(d_loadBMP("resources/sprites/default.bmp"), &textures);
Is the function I call to load a texture into OpenGL. "textures" is a variable of type texMan_t, which is a struct I made.
typedef struct {
GLuint texID[500];
int texInc;
} texMan_t;
The idea is that texMan_t encompasses all your texture IDs for easier use. texInc just keeps track of what the next available member of texID is.
This is d_newTexture:
void d_newTexture(imgInfo_t info, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &tex->texID[tex->texInc]);
    glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc]);

    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

    gluBuild2DMipmaps( GL_TEXTURE_2D, 4, info.width, info.height, GL_RGBA, GL_UNSIGNED_BYTE, info.data );

    tex->texInc++;
    glDisable(GL_TEXTURE_2D);
}
I also use a function by the name of d_newTextures, which is identical to d_newTexture except that it splits a simple sprite sheet into multiple textures.
void d_newTextures(imgInfo_t info, int count, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(count, &tex->texID[tex->texInc]);

    for(int i=0; i<count; i++) {
        glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc+i]);

        glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

        gluBuild2DMipmaps( GL_TEXTURE_2D, 4, info.width, info.height/count,
            GL_RGBA, GL_UNSIGNED_BYTE, &info.data[info.width*(info.height/count)*4*i] );
    }

    tex->texInc+=count;
    glDisable(GL_TEXTURE_2D);
}
What could be the cause of the issues I'm seeing?
EDIT: Recently, I've also been getting the error "*** glibc detected *** out/PokeEngine: free(): invalid pointer: 0x01010101 ***" after closing the program as well, assuming it manages to start properly. The backtrace looks like this:
/lib/i386-linux-gnu/libc.so.6(+0x75ee2)[0xceeee2]
/usr/lib/nvidia-173/libGLcore.so.1(+0x277c7c)[0x109ac7c]
EDIT 2:
Here's the code for d_loadBMP as well. Hope it helps!
imgInfo_t d_loadBMP(char* filename) {
    imgInfo_t out;
    FILE * bmpFile;
    bmpFile = fopen(filename, "r");
    if(bmpFile == NULL) {
        printf("ERROR: Texture file not found!\n");
    }

    bmp_sign bmpSig;
    bmp_fHeader bmpFileHeader;
    bmp_iHeader bmpInfoHeader;

    fread(&bmpSig, sizeof(bmp_sign), 1, bmpFile);
    fread(&bmpFileHeader, sizeof(bmp_fHeader), 1, bmpFile);
    fread(&bmpInfoHeader, sizeof(bmp_iHeader), 1, bmpFile);

    out.width = bmpInfoHeader.width;
    out.height = bmpInfoHeader.height;
    out.size = bmpInfoHeader.imageSize;
    out.data = (char*)malloc(sizeof(char)*out.width*out.height*4);

    // Loaded backwards because that's how BMPs are stored
    for(int i=out.width*out.height*4; i>0; i-=4) {
        fread(&out.data[i+2], sizeof(char), 1, bmpFile);
        fread(&out.data[i+1], sizeof(char), 1, bmpFile);
        fread(&out.data[i], sizeof(char), 1, bmpFile);
        out.data[i+3] = 255;
    }

    return out;
}
The way you're loading BMP files is wrong. You're reading right into structs, which is very unreliable, because the memory layout your compiler chooses for a struct may vastly differ from the data layout in a file. Also your code contains zero error checks. If I had to make an educated guess I'd say this is where your problems are.
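To make that concrete, here is one way to avoid the struct-layout problem (a hypothetical readBMPHeader helper, not your code): read the header into a byte buffer and pull the fields you need from their fixed little-endian offsets, checking for errors along the way.
// Hypothetical helper: read width/height/pixel-data offset from a BMP header
// without relying on the compiler's struct layout. BMP headers are little-endian.
static unsigned int readLE32(const unsigned char *p) {
    return (unsigned int)p[0] | ((unsigned int)p[1] << 8) |
           ((unsigned int)p[2] << 16) | ((unsigned int)p[3] << 24);
}

static int readBMPHeader(FILE *f, int *width, int *height, unsigned int *dataOffset) {
    unsigned char header[54];                        // 14-byte file header + 40-byte info header
    if (fread(header, 1, 54, f) != 54) return 0;     // short read: not a usable BMP
    if (header[0] != 'B' || header[1] != 'M') return 0;
    *dataOffset = readLE32(header + 10);             // where the pixel data starts
    *width      = (int)readLE32(header + 18);
    *height     = (int)readLE32(header + 22);
    return 1;
}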
BTW. glEnable(GL_TEXTURE_…) enables a texture target as data source for rendering. It's completely unnecessary for just generating and uploading textures. You can omit the bracing glEnable(GL_TEXTURE_2D); … glDisable(GL_TEXTURE_2D) blocks in your loading code. Also I'd not use gluBuild2DMipmaps – it doesn't support arbitrary texture dimensions, and you're disabling mipmapping anyway with the GL_NEAREST minification filter – and just upload directly with glTexImage2D.
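For example, the body of d_newTexture could then shrink to roughly this (a sketch reusing your imgInfo_t and texMan_t, with GL_RGBA spelled out as the internal format instead of the legacy component count 4):
glGenTextures(1, &tex->texID[tex->texInc]);
glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// Direct upload of the full-resolution image; no glEnable/glDisable needed here.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, info.width, info.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, info.data);
tex->texInc++;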
Also I don't get your need for a texture manager. Or at least not why your texture manager looks like this. A much better approach would be using a hash map file path → texture ID and a reference count.
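In C++ terms such a manager could be sketched like this (the same idea works in C with any hash table library; all names here are made up for illustration, and the actual loading is left to whatever loader you already have):
#include <string>
#include <unordered_map>

struct TextureEntry {
    GLuint id;
    int    refCount;
};

class TextureCache {
public:
    GLuint acquire(const std::string &path) {
        auto it = cache_.find(path);
        if (it != cache_.end()) {            // already loaded: just bump the refcount
            it->second.refCount++;
            return it->second.id;
        }
        GLuint id = loadFromFile(path);      // e.g. d_loadBMP + glTexImage2D
        cache_[path] = TextureEntry{id, 1};
        return id;
    }

    void release(const std::string &path) {
        auto it = cache_.find(path);
        if (it == cache_.end()) return;
        if (--it->second.refCount == 0) {    // last user gone: free the GL object
            glDeleteTextures(1, &it->second.id);
            cache_.erase(it);
        }
    }

private:
    GLuint loadFromFile(const std::string &path); // placeholder for your loader
    std::unordered_map<std::string, TextureEntry> cache_;
};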
I am working with opengl and glsl, in visual studio c++ 2010. I am writing shaders and I need
to load a texture. I am reading code from a book and in there they load textures with Qt, but I
need to do it with DevIl, can someone please write the equivalent code for texture loading with DevIL? I am new to DevIL and I don't know how to translate this.
// Load texture file
const char * texName = "texture/brick1.jpg";
QImage timg = QGLWidget::convertToGLFormat(QImage(texName,"JPG"));
// Copy file to OpenGL
glActiveTexture(GL_TEXTURE0);
GLuint tid;
glGenTextures(1, &tid);
glBindTexture(GL_TEXTURE_2D, tid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, timg.width(), timg.height(), 0,
GL_RGBA, GL_UNSIGNED_BYTE, timg.bits());
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Given that DevIL is no longer maintained, and that the ILUT part assumes power-of-2 texture dimensions and rescales images in its convenience functions, it actually makes sense to take the detour and do it manually.
Loading an image from a file with DevIL works quite similarly to loading a texture from an image in OpenGL. First you create a DevIL image name and bind it
GLuint loadImageToTexture(char const * const thefilename)
{
ILuint imageID;
ilGenImages(1, &imageID);
ilBindImage(imageID);
now you can load an image from a file
ilLoadImage(thefilename);
check that the image actually provides data; if not, clean up
void * data = ilGetData();
if(!data) {
ilBindImage(0);
ilDeleteImages(1, &imageID);
return 0;
}
retrieve the important parameters
int const width = ilGetInteger(IL_IMAGE_WIDTH);
int const height = ilGetInteger(IL_IMAGE_HEIGHT);
int const type = ilGetInteger(IL_IMAGE_TYPE); // matches OpenGL
int const format = ilGetInteger(IL_IMAGE_FORMAT); // matches OpenGL
Generate a texture name
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
next we set the pixel store parameters (your original code missed that crucial step)
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); // rows are tightly packed
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // pixels are tightly packed
now we can upload the texture image
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, type, data);
next, for convenience we set the minification filter to GL_LINEAR, so that we don't have to supply mipmap levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
finally return the textureID
return textureID;
}
If you want to use mipmapping you can call OpenGL's glGenerateMipmap later on; use glTexParameter with GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL to control the span of the image pyramid that gets generated.
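For example, a caller could do something like this (assuming the loadImageToTexture function above):
GLuint tex = loadImageToTexture("texture/brick1.jpg");
if (tex != 0) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glGenerateMipmap(GL_TEXTURE_2D);    // build the image pyramid from level 0
    // switch from the plain GL_LINEAR set inside the loader to a mipmapped filter
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}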
Hey, I have this function to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;
texture load_texture(std::string fname){
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if(!tex_surf){
        return 0;
    }

    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(tex_surf);
    return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when drawn, the quad just shows the current drawing color), and when I call it from any function outside the main function, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like an other SDL_Surface
int w = pow(2, ceil( log(originalSurface->w)/log(2) ) ); // Round up to the nearest power of two
SDL_Surface* newSurface =
SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB Surface
texture ret;
glGenTextures( 1, &ret );
glBindTexture( GL_TEXTURE_2D, ret );
glTexImage2D( GL_TEXTURE_2D, 0, 3, w, w, 0, GL_RGB,
GL_UNSIGNED_BYTE, newSurface->pixels );
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D.
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA because the actual values of these macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Older OpenGL versions actually accepted the plain component counts 1–4 here, and some drivers still tolerate them, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
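So for the snippet in question, the call would name the format explicitly, e.g.:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_surf->w, tex_surf->h,
             0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);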
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted). So it's possible the poster's problem would go away after resizing the image to have dimensions that are powers of two.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture
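If you need to round a texture size up to the next power of two, an integer-only helper (an alternative to the pow/ceil/log approach shown in the earlier answer) could look like this:
// Round v up to the next power of two (valid for 32-bit sizes, v > 0).
unsigned int nextPowerOfTwo(unsigned int v) {
    v--;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}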